
Text and Code Embeddings by Contrastive Pre-Training

About

Text embeddings are useful features in many applications such as semantic search and computing text similarity. Previous work typically trains models customized for different use cases, varying in dataset choice, training objective, and model architecture. In this work, we show that contrastive pre-training on unsupervised data at scale leads to high-quality vector representations of text and code. The same unsupervised text embeddings that achieve new state-of-the-art results in linear-probe classification also display impressive semantic search capabilities and sometimes even perform competitively with fine-tuned models. On linear-probe classification accuracy averaged over 7 tasks, our best unsupervised model achieves a relative improvement of 4% and 1.8% over the previous best unsupervised and supervised text embedding models, respectively. The same text embeddings, when evaluated on large-scale semantic search, attain a relative improvement of 23.4%, 14.7%, and 10.6% over previous best unsupervised methods on the MSMARCO, Natural Questions, and TriviaQA benchmarks, respectively. Similarly to text embeddings, we train code embedding models on (text, code) pairs, obtaining a 20.8% relative improvement over prior best work on code search.
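The contrastive pre-training described above can be illustrated with an in-batch negatives objective: each (query, document) pair in a batch is a positive, and every other document in the batch serves as a negative. The sketch below is an assumption-laden illustration of that idea (function name, batch shapes, and temperature value are hypothetical, not taken from the paper):

```python
import numpy as np

def info_nce_loss(query_emb, doc_emb, temperature=0.07):
    """In-batch contrastive loss: row i of query_emb should match
    row i of doc_emb; all other rows act as negatives."""
    # L2-normalize so dot products are cosine similarities
    q = query_emb / np.linalg.norm(query_emb, axis=1, keepdims=True)
    d = doc_emb / np.linalg.norm(doc_emb, axis=1, keepdims=True)
    logits = q @ d.T / temperature  # (batch, batch) similarity matrix
    # Cross-entropy where the diagonal entries are the positive class
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

# Toy batch: documents are slightly perturbed copies of the queries,
# so the loss should be small but positive.
rng = np.random.default_rng(0)
q = rng.normal(size=(4, 8))
loss = info_nce_loss(q, q + 0.01 * rng.normal(size=(4, 8)))
```

Lowering the temperature sharpens the softmax over in-batch candidates, which is why matched pairs drive the loss toward zero while mismatched pairs are penalized heavily.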

Arvind Neelakantan, Tao Xu, Raul Puri, Alec Radford, Jesse Michael Han, Jerry Tworek, Qiming Yuan, Nikolas Tezak, Jong Wook Kim, Chris Hallacy, Johannes Heidecke, Pranav Shyam, Boris Power, Tyna Eloundou Nekoul, Girish Sastry, Gretchen Krueger, David Schnurr, Felipe Petroski Such, Kenny Hsu, Madeleine Thompson, Tabarak Khan, Toki Sherbakov, Joanne Jang, Peter Welinder, Lilian Weng • 2022

Related benchmarks

Task | Dataset | Metric | Score | Rank
Reranking | MS MARCO (dev) | MRR@10 | 0.227 | 71
Semantic Textual Similarity | STS-B | Spearman's Rho (x100) | 82.2 | 70
Information Retrieval | BEIR | TREC-COVID | 0.649 | 59
Semantic Textual Similarity | STS 2012-2016 (test) | STS-12 Score | 73.7 | 57
Information Retrieval | BEIR v1.0.0 (test) | ArguAna | 56.7 | 55
Passage Ranking | TREC DL 2019 | NDCG@10 | 0.704 | 24
Information Retrieval | BEIR v1 (test) | ArguAna | 49.2 | 22
Semantic Textual Similarity (STS) | MTEB English 2023 (test) | BIO | 86.35 | 19
Passage retrieval | MS MARCO (dev) | MRR@10 | 34.4 | 17
Passage Ranking | TREC DL 2020 | NDCG@10 | 0.676 | 16

(Showing 10 of 22 rows)
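MRR@10, used in the MS MARCO rows above, averages the reciprocal rank of the first relevant passage per query, counting only hits within the top 10 and scoring 0 otherwise. A minimal sketch of the metric (the ranked lists and relevance labels below are made-up toy data):

```python
def mrr_at_10(ranked_lists, relevant):
    """ranked_lists: per-query list of passage ids, best first.
    relevant: per-query set of relevant passage ids."""
    total = 0.0
    for ranking, rel in zip(ranked_lists, relevant):
        for rank, pid in enumerate(ranking[:10], start=1):
            if pid in rel:
                total += 1.0 / rank
                break  # only the first relevant hit counts
    return total / len(ranked_lists)

# Query 1: relevant passage "b" at rank 2 -> 1/2
# Query 2: relevant passage "x" at rank 1 -> 1
score = mrr_at_10([["a", "b", "c"], ["x", "y"]], [{"b"}, {"x"}])
# -> (0.5 + 1.0) / 2 = 0.75
```

Because only the first relevant result contributes, MRR@10 rewards rankers that surface one good passage early, unlike NDCG@10 (the TREC DL metric above), which credits graded relevance across the whole top-10 list.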
