
DiffCSE: Difference-based Contrastive Learning for Sentence Embeddings

About

We propose DiffCSE, an unsupervised contrastive learning framework for learning sentence embeddings. DiffCSE learns sentence embeddings that are sensitive to the difference between the original sentence and an edited sentence, where the edited sentence is obtained by stochastically masking out the original sentence and then sampling from a masked language model. We show that DiffCSE is an instance of equivariant contrastive learning (Dangovski et al., 2021), which generalizes contrastive learning and learns representations that are insensitive to certain types of augmentations and sensitive to other "harmful" types of augmentations. Our experiments show that DiffCSE achieves state-of-the-art results among unsupervised sentence representation learning methods, outperforming unsupervised SimCSE by 2.3 absolute points on semantic textual similarity tasks.
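The abstract describes two coupled objectives: a dropout-based contrastive loss (as in unsupervised SimCSE) and a difference-prediction loss on sentences edited by stochastic masking and masked-language-model resampling, with the prediction conditioned on the original sentence's embedding. The sketch below is only meant to show how those pieces fit together; it assumes PyTorch and Hugging Face Transformers, and the model names, the simplified discriminator (which reuses the encoder instead of a separate network), the 0.30 mask ratio, and the 0.005 loss weight are illustrative assumptions rather than the authors' released implementation.

```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")               # sentence encoder being trained
generator = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")  # fixed MLM that produces the edits
generator.eval()
encoder.train()  # dropout must stay on: it supplies the two views for the contrastive loss
rtd_head = torch.nn.Linear(encoder.config.hidden_size, 1)              # replaced-token classifier

def embed(input_ids, attention_mask):
    """Use the [CLS] hidden state as the sentence embedding (one common pooling choice)."""
    return encoder(input_ids=input_ids, attention_mask=attention_mask).last_hidden_state[:, 0]

def simcse_contrastive_loss(input_ids, attention_mask, temperature=0.05):
    """Unsupervised SimCSE objective: encode the batch twice so dropout yields two
    views of each sentence, then apply InfoNCE with in-batch negatives."""
    z1 = embed(input_ids, attention_mask)
    z2 = embed(input_ids, attention_mask)
    sim = F.cosine_similarity(z1.unsqueeze(1), z2.unsqueeze(0), dim=-1) / temperature
    labels = torch.arange(sim.size(0), device=sim.device)
    return F.cross_entropy(sim, labels), z1

def make_edited_sentence(input_ids, attention_mask, mask_ratio=0.30):
    """Stochastically mask tokens, then let the generator MLM fill the masks in,
    giving the edited sentence. Also returns which positions actually changed."""
    prob = torch.full(input_ids.shape, mask_ratio)
    special = (input_ids == tokenizer.cls_token_id) | \
              (input_ids == tokenizer.sep_token_id) | \
              (input_ids == tokenizer.pad_token_id)
    prob.masked_fill_(special, 0.0)
    to_mask = torch.bernoulli(prob).bool()
    masked = input_ids.clone()
    masked[to_mask] = tokenizer.mask_token_id
    with torch.no_grad():
        logits = generator(input_ids=masked, attention_mask=attention_mask).logits
    filled = logits.argmax(dim=-1)                     # greedy fill-in; sampling also works
    edited = torch.where(to_mask, filled, input_ids)
    replaced = (edited != input_ids).long()            # labels for difference prediction
    return edited, replaced

def conditional_rtd_loss(edited_ids, attention_mask, replaced, h):
    """Difference prediction as conditional replaced-token detection: given the
    original sentence's embedding h, predict for each token of the edited sentence
    whether the generator changed it. (A separate discriminator network would be
    used in practice; reusing the encoder keeps this sketch short.)"""
    hidden = encoder(input_ids=edited_ids, attention_mask=attention_mask).last_hidden_state
    hidden = hidden + h.unsqueeze(1)                   # naive conditioning on h
    logits = rtd_head(hidden).squeeze(-1)
    valid = attention_mask.bool()
    return F.binary_cross_entropy_with_logits(logits[valid], replaced[valid].float())

# One illustrative training step.
texts = ["a man is playing a guitar", "two dogs run across the field"]
batch = tokenizer(texts, return_tensors="pt", padding=True, truncation=True)
contrastive, h = simcse_contrastive_loss(batch["input_ids"], batch["attention_mask"])
edited, replaced = make_edited_sentence(batch["input_ids"], batch["attention_mask"])
rtd = conditional_rtd_loss(edited, batch["attention_mask"], replaced, h)
loss = contrastive + 0.005 * rtd                       # the weight is a tunable hyperparameter
loss.backward()
```

Because the difference-prediction term is conditioned on the sentence embedding, its gradient pushes the encoder to keep information about which tokens were edited, which is what makes the embeddings sensitive to such "harmful" augmentations while the contrastive term keeps them insensitive to dropout noise.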

Yung-Sung Chuang, Rumen Dangovski, Hongyin Luo, Yang Zhang, Shiyu Chang, Marin Soljačić, Shang-Wen Li, Wen-tau Yih, Yoon Kim, James Glass • 2022

Related benchmarks

Task | Dataset | Metric | Result | Rank
Semantic Textual Similarity | STS tasks (STS12, STS13, STS14, STS15, STS16, STS-B, SICK-R), various (test) | STS12 Score | 72.28 | 412
Semantic Textual Similarity | STS tasks (STS12, STS13, STS14, STS15, STS16, STS-B, SICK-R) | STS12 Score | 72.28 | 195
Sentence Classification Transfer Tasks | SentEval transfer tasks | Average Accuracy | 0.8704 | 99
Retrieval | MS MARCO (dev) | MRR@10 | 0.3202 | 84
Semantic Textual Similarity | STS (Semantic Textual Similarity) 2012-2016 (test) | STS-12 Score | 72.28 | 57
Sentence Embedding Evaluation | SentEval | Average Score (Avg) | 87.04 | 44
Multiple-choice reading comprehension | ViMMRC 2.0 | Accuracy | 41.11 | 29
Natural Language Inference | ViNLI | Accuracy | 58.35 | 17
Dense Retrieval | BEIR zero-shot | TREC-COVID | 49.2 | 13
Information Retrieval | ViWikiFC | Top-1 Accuracy | 44.07 | 12

Showing 10 of 24 benchmark rows.

Other info

Code
