
Understanding and Constructing Latent Modality Structures in Multi-modal Representation Learning

About

Contrastive loss has been increasingly used in learning representations from multiple modalities. In the limit, the nature of the contrastive loss encourages modalities to exactly match each other in the latent space. Yet it remains an open question how modality alignment affects downstream task performance. In this paper, based on an information-theoretic argument, we first prove that exact modality alignment is sub-optimal in general for downstream prediction tasks. Hence we advocate that the key to better performance lies in meaningful latent modality structures rather than perfect modality alignment. To this end, we propose three general approaches to constructing latent modality structures. Specifically, we design 1) a deep feature separation loss for intra-modality regularization; 2) a Brownian-bridge loss for inter-modality regularization; and 3) a geometric consistency loss for both intra- and inter-modality regularization. Extensive experiments are conducted on two popular multi-modal representation learning frameworks: the CLIP-based two-tower model and the ALBEF-based fusion model. We test our model on a variety of tasks including zero/few-shot image classification, image-text retrieval, visual question answering, visual reasoning, and visual entailment. Our method achieves consistent improvements over existing methods, demonstrating the effectiveness and generalizability of our proposed approach to latent modality structure regularization.
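To make the abstract's setup concrete, the sketch below shows the standard symmetric InfoNCE contrastive loss (as used in CLIP-style two-tower models), which in the limit pushes paired image and text embeddings to coincide, together with a hypothetical distance-consistency regularizer in the spirit of the geometric consistency loss. The exact formulations of the paper's three losses are not given here; `geometric_consistency` is an illustrative assumption, not the authors' definition.

```python
import numpy as np

def info_nce(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE contrastive loss over a batch of paired
    image/text embeddings; matched pairs share the same row index."""
    # L2-normalize so the dot product is cosine similarity.
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature           # (B, B) similarity matrix
    labels = np.arange(len(logits))              # positives on the diagonal

    def cross_entropy(l, y):
        l = l - l.max(axis=1, keepdims=True)     # numerical stability
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[np.arange(len(y)), y].mean()

    # Average the image-to-text and text-to-image directions.
    return 0.5 * (cross_entropy(logits, labels) +
                  cross_entropy(logits.T, labels))

def geometric_consistency(img_emb, txt_emb):
    """Hypothetical sketch of a geometric-consistency regularizer:
    penalize mismatch between the intra-modality pairwise distance
    structures of the two modalities (an illustrative form only)."""
    d_img = np.linalg.norm(img_emb[:, None] - img_emb[None, :], axis=-1)
    d_txt = np.linalg.norm(txt_emb[:, None] - txt_emb[None, :], axis=-1)
    return np.mean((d_img - d_txt) ** 2)
```

Note that `info_nce` is minimized only when each image embedding is closer to its paired text than to every other text in the batch, which is why, at its optimum, it drives the two modalities toward exact alignment; a regularizer like `geometric_consistency` instead constrains the *shape* of each modality's latent geometry without requiring the embeddings themselves to coincide.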

Qian Jiang, Changyou Chen, Han Zhao, Liqun Chen, Qing Ping, Son Dinh Tran, Yi Xu, Belinda Zeng, Trishul Chilimbi • 2023

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Image Classification | ImageNet-1K | Top-1 Acc | 56.73 | 524 |
| Image Classification | Food-101 | -- | -- | 494 |
| Image Classification | ImageNet V2 | Top-1 Acc | 17.37 | 487 |
| Image Classification | DTD | -- | -- | 487 |
| Image Classification | Stanford Cars | -- | -- | 477 |
| Visual Question Answering | VQA v2 (test-std) | Accuracy | 74.36 | 466 |
| Image Classification | SUN397 | -- | -- | 425 |
| Image Classification | ImageNet-Sketch | Top-1 Acc | 10.9 | 360 |
| Image Classification | SVHN | Accuracy | 69.82 | 359 |
| Visual Question Answering | VQA 2.0 (test-dev) | Accuracy | 74.26 | 337 |

Showing 10 of 29 rows.
