MCSE: Multimodal Contrastive Learning of Sentence Embeddings

About

Learning semantically meaningful sentence embeddings is an open problem in natural language processing. In this work, we propose a sentence embedding learning approach that exploits both visual and textual information via a multimodal contrastive objective. Through experiments on a variety of semantic textual similarity tasks, we demonstrate that our approach consistently improves the performance across various datasets and pre-trained encoders. In particular, combining a small amount of multimodal data with a large text-only corpus, we improve the state-of-the-art average Spearman's correlation by 1.7%. By analyzing the properties of the textual embedding space, we show that our model excels in aligning semantically similar sentences, providing an explanation for its improved performance.

Miaoran Zhang, Marius Mosbach, David Ifeoluwa Adelani, Michael A. Hedderich, Dietrich Klakow • 2022
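To make the training signal described in the abstract concrete, below is a minimal PyTorch sketch of a multimodal contrastive objective: sentence embeddings are contrasted against other sentences (a text-only term, in the style of SimCSE-like training) and against the embeddings of their paired images (a multimodal term). This is an illustrative sketch, not the authors' implementation; the function names (`info_nce`, `multimodal_contrastive_loss`), the weighting factor `lam`, the temperature of 0.05, and the tensor shapes are assumptions.

```python
# Minimal sketch (not the authors' code) of a multimodal contrastive objective:
# sentences are pulled toward their paired images and toward a second view of
# themselves, and pushed away from other items in the batch.
import torch
import torch.nn.functional as F

def info_nce(anchors: torch.Tensor, positives: torch.Tensor,
             temperature: float = 0.05) -> torch.Tensor:
    """InfoNCE loss: each anchor's positive is the row with the same index."""
    anchors = F.normalize(anchors, dim=-1)
    positives = F.normalize(positives, dim=-1)
    logits = anchors @ positives.t() / temperature      # (batch, batch) cosine similarities
    labels = torch.arange(anchors.size(0), device=anchors.device)
    return F.cross_entropy(logits, labels)

def multimodal_contrastive_loss(text_emb_1: torch.Tensor,
                                text_emb_2: torch.Tensor,
                                image_emb: torch.Tensor,
                                lam: float = 1.0) -> torch.Tensor:
    """Combine a text-text contrastive term (two views of the same sentence)
    with a text-image contrastive term for sentences that have paired images."""
    text_loss = info_nce(text_emb_1, text_emb_2)
    multimodal_loss = info_nce(text_emb_1, image_emb)
    return text_loss + lam * multimodal_loss

# Example usage with random tensors standing in for encoder outputs.
if __name__ == "__main__":
    batch, dim = 8, 768
    t1, t2 = torch.randn(batch, dim), torch.randn(batch, dim)
    img = torch.randn(batch, dim)
    print(multimodal_contrastive_loss(t1, t2, img))
```

In practice the text and image encoders produce embeddings of different sizes, so projection heads into a shared space would precede the loss; they are omitted here to keep the sketch short.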

Related benchmarks

Task | Dataset | Result | Rank
Text-to-Image Retrieval | Flickr30k (test) | Recall@1: 22.5 | 445
Image-to-Text Retrieval | Flickr30k (test) | Recall@1: 16.7 | 392
Semantic Textual Similarity | STS tasks (STS12, STS13, STS14, STS15, STS16, STS-B, SICK-R) | STS12 score: 71.7 | 195
Sentence Embedding Evaluation | MTEB (test) | Classification score: 63.2 | 55
Transfer Learning | SentEval Transfer Learning Tasks (test) | MR: 82.82 | 52
