- Article: Topic 33: Slim Attention, KArAt, XAttention and Multi-Token Attention Explained – What’s Really Changing in Transformers? (Apr 4, 2025)
- Article: Simplifying Alignment: From RLHF to Direct Preference Optimization (DPO) (Jan 19, 2025)
- Space: LLM Hallucination Leaderboard 🚀 – view and filter the LLM hallucination leaderboard
- Model: intfloat/multilingual-e5-large-instruct – feature extraction, 0.6B parameters (updated Jul 10, 2025)