Collections
Discover the best community collections!
Collections including paper arxiv:2210.07316

- MAEB: Massive Audio Embedding Benchmark (Paper • 2602.16008 • Published • 21)
- HUME: Measuring the Human-Model Performance Gap in Text Embedding Task (Paper • 2510.10062 • Published • 10)
- MMTEB: Massive Multilingual Text Embedding Benchmark (Paper • 2502.13595 • Published • 45)
- MIEB: Massive Image Embedding Benchmark (Paper • 2504.10471 • Published • 21)
- Will we run out of data? An analysis of the limits of scaling datasets in Machine Learning (Paper • 2211.04325 • Published • 1)
- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding (Paper • 1810.04805 • Published • 26)
- On the Opportunities and Risks of Foundation Models (Paper • 2108.07258 • Published • 2)
- Super-NaturalInstructions: Generalization via Declarative Instructions on 1600+ NLP Tasks (Paper • 2204.07705 • Published • 2)
- Large Language Model Alignment: A Survey (Paper • 2309.15025 • Published • 2)
- Aligning Large Language Models with Human: A Survey (Paper • 2307.12966 • Published • 1)
- Direct Preference Optimization: Your Language Model is Secretly a Reward Model (Paper • 2305.18290 • Published • 64)
- SteerLM: Attribute Conditioned SFT as an (User-Steerable) Alternative to RLHF (Paper • 2310.05344 • Published • 1)