
Extending Multi-modal Contrastive Representations

About

Multi-modal contrastive representation (MCR) learning across more than three modalities is critical in multi-modal learning. Although recent methods showcase impressive achievements, their heavy dependence on large-scale, high-quality paired data and expensive training costs limit further development. Inspired by the recent C-MCR, this paper proposes Extending Multimodal Contrastive Representation (Ex-MCR), a training-efficient and paired-data-free method that flexibly learns a unified contrastive representation space for more than three modalities by integrating the knowledge of existing MCR spaces. Specifically, Ex-MCR aligns multiple existing MCRs into a shared base MCR, which effectively preserves the original semantic alignment of the base MCR. In addition, we comprehensively enhance the entire learning pipeline for aligning MCR spaces from the perspectives of training data, architecture, and learning objectives. With the preserved original modality alignment and the enhanced space alignment, Ex-MCR shows superior representation learning performance and excellent modality extensibility. To demonstrate its effectiveness, we align the MCR spaces of CLAP (audio-text) and ULIP (3D-vision) into CLIP (vision-text), leveraging the overlapping text and image modalities, respectively. Remarkably, without using any paired data, Ex-MCR learns a unified 3D-image-text-audio contrastive representation and achieves state-of-the-art performance on audio-visual, 3D-image, audio-text, and visual-text retrieval, as well as 3D object classification. More importantly, extensive qualitative results further demonstrate emergent semantic alignment between the extended modalities (e.g., audio and 3D), highlighting the great potential of modality extensibility.
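The core idea described above, mapping a "leaf" MCR space (e.g., CLAP) into the base MCR space (CLIP) using only the overlapping text modality rather than paired leaf-modality data, can be sketched as follows. This is a minimal illustrative sketch using a closed-form ridge-regression projector on synthetic embeddings; the paper's actual pipeline (its training data construction, projector architecture, and learning objectives) is more elaborate, and all dimensions and names here are assumptions.

```python
# Illustrative sketch (NOT the paper's implementation): align a leaf MCR space
# (e.g., CLAP) to the base MCR space (CLIP) via the shared text modality.
# Embeddings are random placeholders standing in for real encoder outputs.
import numpy as np

rng = np.random.default_rng(0)
d_leaf, d_base, n = 512, 768, 1000  # hypothetical embedding dims / corpus size

# The SAME captions encoded by both spaces act as the bridge: no paired
# audio-image data is needed, only text embeddings from each space.
T_leaf = rng.normal(size=(n, d_leaf))   # text embeddings in the leaf (CLAP) space
T_base = rng.normal(size=(n, d_base))   # text embeddings in the base (CLIP) space

# Ridge-regularized least-squares projector W mapping leaf -> base space.
lam = 1e-3
W = np.linalg.solve(T_leaf.T @ T_leaf + lam * np.eye(d_leaf), T_leaf.T @ T_base)

# Any leaf-modality embedding (e.g., CLAP audio) now lives in the base space,
# so it can be compared directly against CLIP image or text embeddings.
audio_leaf = rng.normal(size=(4, d_leaf))
audio_in_base = audio_leaf @ W
print(audio_in_base.shape)  # (4, 768)
```

Because the base space is never modified, its original image-text alignment is preserved by construction; only the projector for each leaf space is learned.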

Zehan Wang, Ziang Zhang, Luping Liu, Yang Zhao, Haifeng Huang, Tao Jin, Zhou Zhao · 2023

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Image-Text Retrieval | COCO (val) | R@1 | 32.58 | 43 |
| 3D-Image Retrieval | Objaverse LVIS | R@1 | 2.54 | 8 |
| Audio-Text Retrieval | AudioCaps (val) | mAP | 11.19 | 5 |
| Emergent modality binding (au -> te -> vi) | MSRVTT (test) | mAP | 9 | 5 |
| Emergent modality binding (au -> te -> vi) | AVE (test) | mAP | 0.071 | 5 |
| Emergent modality binding (vi -> te -> au) | MSRVTT (test) | mAP | 10.2 | 5 |
| Emergent modality binding (vi -> te -> au) | AVE (test) | mAP | 5.8 | 5 |
| Audio-Image Retrieval | FlickrNet (test) | mAP | 4.94 | 4 |
| Audio-Image Retrieval | AVE (test) | mAP | 4.46 | 4 |
| 3D Object Classification | ModelNet40 (val) | Top-1 Accuracy | 0.6653 | 4 |
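For reference, the retrieval metric reported above (R@1) is conventionally computed by ranking gallery items against each query by cosine similarity in the unified embedding space. The sketch below shows that standard computation on synthetic data; it is a generic illustration, not the paper's evaluation code.

```python
# Generic sketch of the R@1 retrieval metric on synthetic embeddings.
import numpy as np

def recall_at_1(queries, gallery):
    """Fraction of queries whose nearest gallery item (by cosine similarity)
    is the ground-truth match at the same index."""
    q = queries / np.linalg.norm(queries, axis=1, keepdims=True)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    nearest = (q @ g.T).argmax(axis=1)          # index of most similar item
    return float((nearest == np.arange(len(q))).mean())

rng = np.random.default_rng(0)
gallery = rng.normal(size=(100, 64))
# Queries are slightly noisy copies of their matches, so R@1 should be high.
queries = gallery + 0.05 * rng.normal(size=(100, 64))
print(recall_at_1(queries, gallery))
```

mAP and Top-1 accuracy are computed analogously from the same similarity matrix, differing only in how the ranked list is scored.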

Other info

Code
