
Non-Linguistic Supervision for Contrastive Learning of Sentence Embeddings

About

Semantic representation learning for sentences is an important and well-studied problem in NLP. The current trend for this task involves training a Transformer-based sentence encoder through a contrastive objective with text, i.e., clustering sentences with semantically similar meanings and scattering others. In this work, we find that the performance of Transformer models as sentence encoders can be improved by training with multi-modal multi-task losses, using unpaired examples from another modality (e.g., sentences and unrelated image/audio data). In particular, besides learning by the contrastive loss on text, our model simultaneously clusters examples from a non-linguistic domain (e.g., visual/audio) with a similar contrastive loss. The reliance of our framework on unpaired non-linguistic data makes it language-agnostic, enabling it to be widely applicable beyond English NLP. Experiments on 7 semantic textual similarity benchmarks reveal that models trained with the additional non-linguistic (images/audio) contrastive objective produce higher-quality sentence embeddings. This indicates that Transformer models are able to generalize better by doing a similar task (i.e., clustering) with unpaired examples from different modalities in a multi-task fashion.

Yiren Jian, Chongyang Gao, Soroush Vosoughi • 2022
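For intuition only, here is a minimal PyTorch sketch of the kind of multi-task objective described in the abstract: a SimCSE-style contrastive loss on sentence embeddings plus an analogous contrastive loss on embeddings of unpaired images, added together. This is not the authors' implementation (see the Code link below); the function names, the dropout-based two-view trick, and the weighting factor `lam` are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def info_nce(view_a: torch.Tensor, view_b: torch.Tensor, temperature: float = 0.05) -> torch.Tensor:
    """InfoNCE loss: matching rows of view_a and view_b are positives,
    all other rows in the batch serve as negatives."""
    a = F.normalize(view_a, dim=-1)
    b = F.normalize(view_b, dim=-1)
    logits = a @ b.t() / temperature                      # (batch, batch) cosine similarities
    labels = torch.arange(a.size(0), device=a.device)     # positives lie on the diagonal
    return F.cross_entropy(logits, labels)

def multitask_step(sent_encoder, img_encoder, sentences, images, lam: float = 1.0):
    """One training step of a multi-task objective (illustrative sketch):
    a text contrastive loss plus a contrastive loss on unpaired images.
    `sentences` and `images` are independent batches; no pairing is required."""
    # Two views of the same sentences, e.g., two stochastic dropout passes (SimCSE-style).
    s1, s2 = sent_encoder(sentences), sent_encoder(sentences)
    text_loss = info_nce(s1, s2)

    # Two views of the same (unrelated) images, e.g., from random augmentations.
    v1, v2 = img_encoder(images), img_encoder(images)
    visual_loss = info_nce(v1, v2)

    # The non-linguistic loss acts as auxiliary supervision; `lam` is an assumed weighting.
    return text_loss + lam * visual_loss
```

The key point the sketch conveys is that the two losses never compare text to images; the visual (or audio) batch only provides an extra clustering task that is optimized jointly with the text objective.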

Related benchmarks

Task | Dataset | Result | Rank
Semantic Textual Similarity | STS tasks (STS12, STS13, STS14, STS15, STS16, STS-B, SICK-R), various (test) | STS12 Score: 73.09 | 393
Sentence Classification Transfer Tasks | SentEval transfer tasks | Average Accuracy: 0.8686 | 99
Semantic Textual Similarity | English STS | Average Score: 81.86 | 68
Semantic Textual Similarity | Multilingual STS-B (val) | Spearman Correlation: 77.48 | 8

Other info

Code
