
Multi-modal Alignment using Representation Codebook

About

Aligning signals from different modalities is an important step in vision-language representation learning, as it affects the performance of later stages such as cross-modality fusion. Since image and text typically reside in different regions of the feature space, directly aligning them at the instance level is challenging, especially when features are still evolving during training. In this paper, we propose to align at a higher and more stable level using cluster representations. Specifically, we treat image and text as two "views" of the same entity and encode them into a joint vision-language coding space spanned by a dictionary of cluster centers (a codebook). We contrast positive and negative samples via their cluster assignments while simultaneously optimizing the cluster centers. To further smooth out the learning process, we adopt a teacher-student distillation paradigm, where the momentum teacher of one view guides the student learning of the other. We evaluate our approach on common vision-language benchmarks and obtain a new SoTA on zero-shot cross-modality retrieval while remaining competitive on various other transfer tasks.
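The two key mechanics in the abstract — contrasting views through their codebook assignments, and updating a momentum teacher — can be pictured with a toy NumPy sketch. This is an illustrative "swapped prediction" formulation under assumed details (function names, temperature, loss form are my own), not the paper's actual implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def codebook_alignment_loss(z_img, z_txt, codebook, temp=0.1):
    """Toy cluster-level contrast: each view's soft cluster assignment
    supervises the other view's codebook logits (swapped prediction).
    Shapes: z_img, z_txt are (B, d); codebook is (K, d)."""
    # Cosine-normalize features and codebook entries.
    z_img = z_img / np.linalg.norm(z_img, axis=1, keepdims=True)
    z_txt = z_txt / np.linalg.norm(z_txt, axis=1, keepdims=True)
    c = codebook / np.linalg.norm(codebook, axis=1, keepdims=True)
    p_img = softmax(z_img @ c.T / temp)  # (B, K) soft assignments
    p_txt = softmax(z_txt @ c.T / temp)
    # Swapped cross-entropy: text assignments target image logits and
    # vice versa (targets would be detached in a real training loop).
    loss = -0.5 * (np.sum(p_txt * np.log(p_img + 1e-9), axis=1)
                   + np.sum(p_img * np.log(p_txt + 1e-9), axis=1)).mean()
    return loss

def ema_update(teacher, student, m=0.995):
    """Momentum-teacher update: teacher <- m * teacher + (1 - m) * student."""
    return m * teacher + (1 - m) * student
```

In a full system the codebook entries would also receive gradients, and the teacher branch (updated via `ema_update`) would produce the assignment targets; this sketch only shows the shape of the computation.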

Jiali Duan, Liqun Chen, Son Tran, Jinyu Yang, Yi Xu, Belinda Zeng, Trishul Chilimbi · 2022

Related benchmarks

Task | Dataset | Metric | Result | Rank
Visual Question Answering | VQA v2 (test-std) | Accuracy | 73.29 | 466
Visual Question Answering | VQA 2.0 (test-dev) | Accuracy | 73.15 | 337
Natural Language Visual Reasoning | NLVR2 (test-p) | Accuracy | 80.8 | 327
Natural Language Visual Reasoning | NLVR2 (dev) | Accuracy | 80.5 | 288
Visual Entailment | SNLI-VE (test) | Overall Accuracy | 80.4 | 197
Visual Entailment | SNLI-VE (val) | Overall Accuracy | 80.5 | 109
Text-to-Image Retrieval | Flickr30k (1K) | R@1 | 79.7 | 48
Image-to-Text Retrieval | MS COCO 5K | R@1 | 0.715 | 46
Text-to-Image Retrieval | MS COCO 5K | R@1 | 53.9 | 39
Image-to-Text Retrieval | Flickr30k (1K) | R@1 | 91.7 | 30

(Showing 10 of 16 rows.)
