Towards Uniformity and Alignment for Multimodal Representation Learning
About
Multimodal representation learning aims to construct a shared embedding space in which heterogeneous modalities are semantically aligned. Despite strong empirical results, InfoNCE-based objectives introduce inherent conflicts that yield distribution gaps across modalities. In this work, we identify two such conflicts in the multimodal regime, both exacerbated as the number of modalities grows: (i) an alignment-uniformity conflict, in which the repulsive force of the uniformity term undermines pairwise alignment, and (ii) an intra-alignment conflict, in which aligning multiple modalities induces competing alignment directions. To address these issues, we propose a principled decoupling of alignment and uniformity for multimodal representations, yielding a conflict-free recipe for multimodal learning that supports both discriminative and generative use cases without task-specific modules. We further provide a theoretical guarantee that our method acts as an efficient proxy for a global Hölder divergence over multiple modality distributions, and thus reduces the distribution gap among modalities. Extensive experiments on retrieval and UnCLIP-style generation demonstrate consistent gains.
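For intuition, the sketch below shows what decoupling alignment from uniformity can look like, using the classic formulations of Wang and Isola (2020) extended to several modalities: alignment pulls matched pairs together across modalities, while uniformity spreads features within each modality. This is a minimal illustrative sketch, not the exact training objective of this work; the function names and the `lam`/`t`/`alpha` hyperparameters are assumptions made for the example.

```python
# Minimal sketch of decoupled alignment + uniformity losses for multiple
# modalities (Wang & Isola, 2020 style). Illustrative only; names and
# hyperparameters (`lam`, `t`, `alpha`) are hypothetical.
import torch
import torch.nn.functional as F

def alignment_loss(x, y, alpha=2):
    # Pull matched pairs together: mean powered distance between
    # positive pairs drawn from two modalities.
    return (x - y).norm(p=2, dim=1).pow(alpha).mean()

def uniformity_loss(x, t=2):
    # Spread features over the unit hypersphere: log of the mean
    # Gaussian potential over all pairs within one modality.
    return torch.pdist(x, p=2).pow(2).mul(-t).exp().mean().log()

def decoupled_multimodal_loss(embeddings, lam=1.0):
    # `embeddings`: list of (N, D) L2-normalized tensors, one per modality,
    # where row i of every tensor encodes the same underlying sample.
    align = sum(
        alignment_loss(embeddings[i], embeddings[j])
        for i in range(len(embeddings))
        for j in range(i + 1, len(embeddings))
    )
    unif = sum(uniformity_loss(z) for z in embeddings)
    return align + lam * unif

# Toy usage with three modalities (e.g., video, audio, text features).
mods = [F.normalize(torch.randn(8, 64), dim=1) for _ in range(3)]
loss = decoupled_multimodal_loss(mods)
```

Because the uniformity term acts only within each modality and the alignment term only on matched pairs, the two forces are no longer entangled inside a single InfoNCE softmax, which is the intuition behind the conflict-free decoupling described above.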
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Text-to-Video Retrieval | DiDeMo | R@1 | 0.582 | 360 |
| Text-to-Video Retrieval | MSR-VTT | Recall@1 | 58.7 | 313 |
| Text-to-Video Retrieval | ActivityNet | R@1 | 0.594 | 197 |
| Video-to-Text Retrieval | MSR-VTT | Recall@1 | 54.6 | 157 |
| Video-to-Text Retrieval | DiDeMo | R@1 | 51.9 | 108 |
| Video-to-Text Retrieval | ActivityNet | R@1 | 0.525 | 99 |
| Cross-modal Generation | VGGSound | Average Score | 48.09 | 9 |