
Towards Uniformity and Alignment for Multimodal Representation Learning

About

Multimodal representation learning aims to construct a shared embedding space in which heterogeneous modalities are semantically aligned. Despite strong empirical results, InfoNCE-based objectives introduce inherent conflicts that yield distribution gaps across modalities. In this work, we identify two conflicts in the multimodal regime, both exacerbated as the number of modalities increases: (i) an alignment-uniformity conflict, whereby the repulsion of uniformity undermines pairwise alignment, and (ii) an intra-alignment conflict, where aligning multiple modalities induces competing alignment directions. To address these issues, we propose a principled decoupling of alignment and uniformity for multimodal representations, providing a conflict-free recipe for multimodal learning that simultaneously supports discriminative and generative use cases without task-specific modules. We then provide a theoretical guarantee that our method acts as an efficient proxy for a global Hölder divergence over multiple modality distributions, and thus reduces the distribution gap among modalities. Extensive experiments on retrieval and UnCLIP-style generation demonstrate consistent gains.
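The exact objective is not spelled out on this page, but the decoupling the abstract describes echoes the alignment/uniformity decomposition of Wang & Isola (2020): an attraction term that pulls matched samples together across modality pairs, and a repulsion term applied within each modality, so uniformity no longer fights cross-modal alignment. Below is a minimal PyTorch sketch under that assumption; the function names, the pairwise-averaging scheme, and the temperature t are illustrative choices, not the authors' implementation.

```python
# A minimal sketch of decoupled alignment/uniformity losses for M modalities,
# in the spirit of Wang & Isola (2020). Illustrative only: the paper's actual
# objective may weight, normalize, or couple these terms differently.
import torch
import torch.nn.functional as F

def alignment_loss(embs):
    """Pull matched samples together across every modality pair.

    embs: list of M tensors, each (N, D), L2-normalized, where row i of
    every tensor corresponds to the same underlying sample.
    """
    loss, pairs = 0.0, 0
    M = len(embs)
    for a in range(M):
        for b in range(a + 1, M):
            # Mean squared L2 distance between matched embeddings.
            loss = loss + (embs[a] - embs[b]).pow(2).sum(dim=1).mean()
            pairs += 1
    return loss / pairs

def uniformity_loss(emb, t=2.0):
    """Spread one modality's embeddings over the unit hypersphere.

    Applied per modality, so the repulsion never acts across modalities.
    """
    sq_dists = torch.cdist(emb, emb).pow(2)          # (N, N) squared distances
    n = emb.shape[0]
    off_diag = sq_dists[~torch.eye(n, dtype=torch.bool)]
    return off_diag.mul(-t).exp().mean().log()

# Usage: three modalities (e.g. video, audio, text), batch of 128, dim 256.
embs = [F.normalize(torch.randn(128, 256), dim=1) for _ in range(3)]
total = alignment_loss(embs) + sum(uniformity_loss(e) for e in embs)
```

One reading of the abstract's first conflict is that keeping repulsion within each modality, rather than inside a single InfoNCE softmax over all embeddings, removes the cross-modal repulsion that would otherwise undermine pairwise alignment.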

Wenzhe Yin, Pan Zhou, Zehao Xiao, Jie Liu, Shujian Yu, Jan-Jakob Sonke, Efstratios Gavves • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|------|---------|--------|--------|------|
| Text-to-Video Retrieval | DiDeMo | R@1 | 58.2 | 360 |
| Text-to-Video Retrieval | MSR-VTT | R@1 | 58.7 | 313 |
| Text-to-Video Retrieval | ActivityNet | R@1 | 59.4 | 197 |
| Video-to-Text Retrieval | MSR-VTT | R@1 | 54.6 | 157 |
| Video-to-Text Retrieval | DiDeMo | R@1 | 51.9 | 108 |
| Video-to-Text Retrieval | ActivityNet | R@1 | 52.5 | 99 |
| Cross-modal Generation | VGGSound | Average Score | 48.09 | 9 |
