
On the Language Encoder of Contrastive Cross-modal Models

About

Contrastive cross-modal models such as CLIP and CLAP aid various vision-language (VL) and audio-language (AL) tasks. However, there has been limited investigation of, and improvement in, their language encoder, the central component that encodes natural language descriptions of images/audio into vector representations. We extensively evaluate how unsupervised and supervised sentence embedding training affect language encoder quality and cross-modal task performance. In VL pretraining, we find that sentence embedding training improves language encoder quality and aids cross-modal tasks, improving contrastive VL models such as CyCLIP. In contrast, AL pretraining benefits less from sentence embedding training, which may result from the limited amount of pretraining data. We analyze the representation spaces to understand the strengths of sentence embedding training, and find that it improves text-space uniformity at the cost of decreased cross-modal alignment.
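The uniformity and alignment properties mentioned above are commonly measured with the metrics of Wang and Isola (2020): alignment as the mean squared distance between paired (e.g., image-text) embeddings, and uniformity as the log of the mean pairwise Gaussian potential within one embedding space. Below is a minimal NumPy sketch of these two metrics; the function names and the choice of `t = 2.0` follow the common convention, not necessarily this paper's exact setup:

```python
import numpy as np

def normalize(x):
    """L2-normalize embeddings row-wise (contrastive models use unit vectors)."""
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def alignment(x, y):
    """Mean squared distance between positive pairs (row i of x paired with row i of y).
    Lower is better: matched image/text embeddings sit close together."""
    return np.mean(np.sum((x - y) ** 2, axis=-1))

def uniformity(x, t=2.0):
    """Log of the mean Gaussian potential over all distinct pairs within one space.
    More negative is better: embeddings spread out over the hypersphere."""
    sq_dists = np.sum((x[:, None, :] - x[None, :, :]) ** 2, axis=-1)
    n = x.shape[0]
    off_diag = sq_dists[~np.eye(n, dtype=bool)]  # drop self-distances
    return np.log(np.mean(np.exp(-t * off_diag)))
```

Under these definitions, the paper's finding reads as: sentence embedding training pushes `uniformity` of the text space down (better spread) while pushing cross-modal `alignment` up (weaker image-text coupling).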

Mengjie Zhao, Junya Ono, Zhi Zhong, Chieh-Hsin Lai, Yuhta Takida, Naoki Murata, Wei-Hsiang Liao, Takashi Shibuya, Hiromi Wakaki, Yuki Mitsufuji • 2023

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Image Classification | ImageNet-A (test) | Top-1 Acc | 5.19 | 154 |
| Image Retrieval | Flickr30K | R@1 | 31.74 | 144 |
| Image Classification | ImageNet-Sketch (test) | Top-1 Acc | 0.1285 | 132 |
| Image Classification | ImageNet-R (test) | -- | -- | 105 |
| Text Retrieval | Flickr30K | R@1 | 40 | 75 |
| Image Classification | ImageNet V2 (val) | Top-1 Accuracy | 18.68 | 43 |
| Audio Retrieval | AudioCaps | R@1 | 42.73 | 42 |
| Audio Classification | US8K (test) | R@1 Accuracy | 0.7027 | 41 |
| Text Retrieval | MS-COCO | R@1 | 21.3 | 37 |
| Image Retrieval | MS-COCO | R@5 | 37.75 | 36 |

Showing 10 of 19 rows.
