
Contrastive Learning of Sentence Embeddings from Scratch

About

Contrastive learning has been the dominant approach to training state-of-the-art sentence embeddings. Previous studies typically learn sentence embeddings either from human-annotated natural language inference (NLI) data or from large collections of unlabeled sentences in an unsupervised manner. However, even unlabeled sentences can be difficult to collect in certain domains. To address this, we present SynCSE, a contrastive learning framework that trains sentence embeddings with synthesized data. Specifically, we explore using large language models to synthesize the data samples required for contrastive learning, including (1) producing positive and negative annotations for given unlabeled sentences (SynCSE-partial), and (2) generating sentences along with their corresponding annotations from scratch (SynCSE-scratch). Experimental results on sentence similarity and reranking tasks indicate that both SynCSE-partial and SynCSE-scratch greatly outperform unsupervised baselines, and SynCSE-partial even achieves performance comparable to supervised models in most settings.
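To make the training setup concrete, the sketch below shows a SimCSE-style contrastive (InfoNCE) objective over (anchor, positive, hard-negative) embedding triples, the kind of triples SynCSE synthesizes. This is an illustrative NumPy sketch, not the authors' implementation; the function name, temperature value, and toy data are assumptions.

```python
import numpy as np

def info_nce_loss(anchors, positives, negatives, temperature=0.05):
    """Contrastive (InfoNCE) loss over (anchor, positive, hard-negative)
    embedding triples. For anchor i, positive i is the target; all other
    positives in the batch plus all hard negatives act as negatives.
    Shapes: (batch, dim) for each array.
    """
    def normalize(x):
        return x / np.linalg.norm(x, axis=1, keepdims=True)

    a, p, n = normalize(anchors), normalize(positives), normalize(negatives)
    # Cosine similarity of each anchor to every positive and every negative,
    # scaled by the temperature.
    sims = np.concatenate([a @ p.T, a @ n.T], axis=1) / temperature
    # Log-softmax over each row; the correct "class" for anchor i is column i.
    log_probs = sims - np.log(np.exp(sims).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs[:, : len(a)]))

# Toy embeddings: each anchor is near its positive and unrelated to negatives.
rng = np.random.default_rng(0)
anchors = rng.normal(size=(4, 8))
positives = anchors + 0.01 * rng.normal(size=(4, 8))  # paraphrase-like copies
negatives = rng.normal(size=(4, 8))                   # unrelated sentences
loss = info_nce_loss(anchors, positives, negatives)
print(f"loss: {loss:.4f}")
```

The loss is small when each anchor is already closest to its own positive; during training it is minimized so that synthesized positives are pulled toward their anchors while synthesized hard negatives are pushed away.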

Junlei Zhang, Zhenzhong Lan, Junxian He • 2023

Related benchmarks

Task | Dataset | Result | Rank
Semantic Textual Similarity | STS tasks (STS12, STS13, STS14, STS15, STS16, STS-B, SICK-R), various (test) | STS12 Score: 76.15 | 393
Sentence Classification | SentEval transfer tasks (test) | MR: 87.42 | 73
Reranking | AskUbuntu (test) | MAP: 55.22 | 16
Reranking | MindSmall (test) | MAP: 30.56 | 16
Reranking | SCIDOCS (test) | MAP: 71.33 | 16
Reranking | StackOverflow (test) | MAP: 40.06 | 16
