Text Embeddings by Weakly-Supervised Contrastive Pre-training

About

This paper presents E5, a family of state-of-the-art text embeddings that transfer well to a wide range of tasks. The model is trained in a contrastive manner with weak supervision signals from our curated large-scale text pair dataset (called CCPairs). E5 can be readily used as a general-purpose embedding model for any task requiring a single-vector representation of texts, such as retrieval, clustering, and classification, achieving strong performance in both zero-shot and fine-tuned settings. We conduct extensive evaluations on 56 datasets from the BEIR and MTEB benchmarks. In zero-shot settings, E5 is the first model that outperforms the strong BM25 baseline on the BEIR retrieval benchmark without using any labeled data. When fine-tuned, E5 obtains the best results on the MTEB benchmark, beating existing embedding models with 40x more parameters.
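Concretely, "single-vector representation" means each text is encoded once into a fixed-size vector, and task decisions (retrieval ranking, cluster assignment, class prediction) reduce to vector comparisons. The sketch below shows this usage pattern for retrieval. It assumes the publicly released intfloat/e5-base checkpoint on the Hugging Face Hub and its "query: " / "passage: " prefix convention; neither detail comes from this listing itself.

```python
# Minimal sketch: using an E5 checkpoint as a general-purpose embedder.
# Assumption: the "intfloat/e5-base" checkpoint and its "query: "/"passage: "
# prefixes, which are not stated on this page.
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("intfloat/e5-base")
model = AutoModel.from_pretrained("intfloat/e5-base")

def embed(texts):
    # Tokenize, run the encoder, then mean-pool token states using the
    # attention mask so padding tokens do not contribute to the embedding.
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state
    mask = batch["attention_mask"].unsqueeze(-1).float()
    pooled = (hidden * mask).sum(dim=1) / mask.sum(dim=1)
    return F.normalize(pooled, p=2, dim=1)  # unit vectors for cosine scoring

# Asymmetric retrieval: queries and passages get different prefixes.
queries = embed(["query: how do text embeddings work"])
passages = embed([
    "passage: Text embeddings map sentences to dense vectors.",
    "passage: BM25 is a sparse lexical retrieval baseline.",
])
scores = queries @ passages.T  # cosine similarity, since vectors are unit-norm
print(scores)
```

Because the vectors are L2-normalized, the dot product equals cosine similarity, so the same embeddings serve retrieval, clustering, and classification without task-specific heads.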

Liang Wang, Nan Yang, Xiaolong Huang, Binxing Jiao, Linjun Yang, Daxin Jiang, Rangan Majumder, Furu Wei • 2022

Related benchmarks

Task                           Dataset           Metric        Result   Rank
Commonsense Reasoning          HellaSwag         Accuracy      75       1891
Natural Language Inference     RTE               Accuracy      68.5     448
Multi-hop Question Answering   2WikiMultihopQA   Exact Match   46.33    387
Reading Comprehension          BoolQ             Accuracy      71       279
Topic Classification           AG-News           Accuracy      90.6     225
Commonsense Reasoning          COPA              Accuracy      84       197
Multi-hop Question Answering   MuSiQue           Exact Match   21.39    185
Natural Language Inference     SNLI              Accuracy      53.7     180
Sentiment Analysis             SST-2             Accuracy      92.4     165
Multi-hop Question Answering   Bamboogle         Exact Match   44       128

Showing 10 of 144 rows.

Other info

Code
