Abdur-Rahman Butler
Recent Activity
Today, the Ubiquitous Knowledge Processing (UKP) Lab is transferring the Sentence Transformers project to Hugging Face. Sentence Transformers will remain a community-driven, open-source project under the same open-source license (Apache 2.0) as before. Contributions from researchers, developers, and enthusiasts are welcome and encouraged, and the project will continue to prioritize transparency, collaboration, and broad accessibility.
Read our full announcement for more details and quotes from UKP and Hugging Face leadership: https://huggingface.co/blog/sentence-transformers-joins-hf
We see a growing desire among companies to move from large LLM APIs to local models for better control and privacy, and that is reflected in the library's growth: in the last 30 days alone, Sentence Transformers models have been downloaded more than 270 million times, second only to transformers.
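For anyone making that switch, here's a minimal sketch of running an embedding model entirely locally with the library (the model name is just a popular example from the Hub; any Sentence Transformers model works):

```python
# Minimal sketch: fully local embedding inference with Sentence Transformers.
# The model below is just a popular example from the Hub; swap in any embedding model.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

sentences = [
    "The weather is lovely today.",
    "It's so sunny outside!",
]
embeddings = model.encode(sentences)  # NumPy array, one 384-dim vector per sentence

# Cosine similarity between all pairs (requires sentence-transformers >= 3.0)
print(model.similarity(embeddings, embeddings))
```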
I would like to thank the UKP Lab, and especially Nils Reimers and Iryna Gurevych, for their dedication to the project and for their trust in me, both now and two years ago. Back then, neither of you knew me well, yet you trusted me to take the project to new heights. That choice proved very valuable for the embedding & Information Retrieval community, and I believe the choice to grant Hugging Face stewardship will be similarly successful.
I'm very excited about the future of the project, and for the world of embeddings and retrieval at large!
Australian-made LLM beats OpenAI and Google at legal retrieval
Introducing MTEB v2: Evaluation of embedding and retrieval systems for more than just text
How I Built Lightning-Fast Vector Search for Legal Documents
This model is the product of quite literally months of painstaking work alongside @abdurrahmanbutler collecting, cleaning, and processing terabytes of data as well as coming up with novel improvements to the standard embedder training recipe to push the limits of what's possible.
Kanon 2 Embedder is my most advanced model to date. On MLEB, it benchmarks as 9% more accurate and 30% faster than OpenAI's best embedding model.
Even when truncated from 1,792 to 768 dimensions, Kanon 2 Embedder continues to hold the number one spot on MLEB.
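For readers who want to try truncation themselves, here is a generic sketch of the usual recipe for Matryoshka-style embeddings (slice, then L2-renormalize); the arrays below are placeholders standing in for real model outputs:

```python
import numpy as np

def truncate_embeddings(embeddings: np.ndarray, dims: int = 768) -> np.ndarray:
    """Keep the first `dims` dimensions of each vector, then L2-renormalize
    so cosine similarity on the truncated vectors behaves as expected."""
    truncated = embeddings[:, :dims]
    norms = np.linalg.norm(truncated, axis=1, keepdims=True)
    return truncated / np.clip(norms, 1e-12, None)

# Placeholder vectors standing in for real 1,792-dimensional embeddings
full = np.random.randn(4, 1792).astype(np.float32)
small = truncate_embeddings(full, dims=768)  # shape: (4, 768)
```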
Importantly, Kanon 2 Embedder is also privacy- and security-friendly: unlike Voyage, Cohere, and Jina, we do not use your data to train our models by default.
Kanon 2 Embedder can also be self-hosted for enterprises with heightened security or reliability requirements.
You can read the full announcement on our blog to learn how we did it and how you can get started using Kanon 2 Embedder to embed your own legal documents: https://isaacus.com/blog/introducing-kanon-2-embedder
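As a purely illustrative sketch of what calling a hosted embedding endpoint could look like: the endpoint path, model identifier, and payload fields below are placeholders I've assumed, not Isaacus's documented API, so consult the blog post and docs for the real interface.

```python
import requests

API_KEY = "YOUR_API_KEY"  # placeholder credential

# Hypothetical sketch only: the endpoint, model name, and field names below
# are assumptions, not Isaacus's documented API. See the linked blog for the real one.
response = requests.post(
    "https://api.isaacus.com/v1/embeddings",  # assumed endpoint
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "kanon-2-embedder",  # assumed model identifier
        "texts": ["This Agreement is governed by the laws of New South Wales."],
    },
    timeout=30,
)
response.raise_for_status()
print(response.json())
```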
We invite scrutiny and feedback of our benchmark.
This project was, of course, inspired by the commendable work of the teams behind RTEB and MTEB.
Rather than having a legal split of a multilingual benchmark, we think it makes the most sense to achieve full domain coverage in a single language first. In theory, someone could build, for instance, an MLEB-F (French) and use a mix of French, Belgian, Swiss, and other French-language legal documents to get full-spectrum coverage of the language.
If you are interested in doing something like that, reach out to us and we'd love to exchange notes and guidance :).
Anyways, happy benchmarking!
@fzliu @KennethEnevoldsen @Samoed @isaacchung @tomaarsen @fzoll @Muennighoff @nouamanetazi @loicmagne @nreimers @clem
Introducing MLEB: the Massive Legal Embedding Benchmark.
A suite of 10 high-quality English legal IR datasets, designed by legal experts to set a new standard for comparing embedding models.
Whether you're exploring legal RAG on your home computer or running enterprise-scale retrieval, apples-to-apples evaluation is crucial. That's why we've open-sourced everything, including our 7 brand-new, hand-crafted retrieval datasets. All of these datasets are now live on Hugging Face.
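If you want to pull one down, a sketch along these lines should work; the repository ID here is a placeholder, so browse the Isaacus organization on the Hub for the actual dataset names and splits:

```python
from datasets import load_dataset

# Placeholder repository ID: browse the Isaacus org on the Hugging Face Hub
# for the actual MLEB dataset names and their available splits.
dataset = load_dataset("isaacus/example-mleb-dataset", split="test")
print(dataset[0])  # inspect one query/document example
```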
Any guesses which embedding model leads on legal retrieval?
Hint: it's not OpenAI or Google - they place 7th and 9th on our leaderboard.
To do well on MLEB, embedding models must demonstrate both extensive legal domain knowledge and strong legal reasoning skills.
https://huggingface.co/blog/isaacus/introducing-mleb