Aligning the True Semantics: Constrained Decoupling and Distribution Sampling for Cross-Modal Alignment
About
Cross-modal alignment is a crucial task in multimodal learning that aims to achieve semantic consistency between vision and language: matched image-text pairs should exhibit similar semantics. Traditional algorithms pursue embedding consistency as a proxy for semantic consistency, ignoring the non-semantic information present in the embeddings. An intuitive remedy is to decouple each embedding into a semantic component and a modality component and to align only the semantic component. However, this introduces two main challenges: (1) there is no established standard for distinguishing semantic from modality-specific information, and (2) the modality gap can cause semantic alignment deviation or information loss. To align the true semantics, we propose a novel cross-modal alignment algorithm via **C**onstrained **D**ecoupling and **D**istribution **S**ampling (CDDS). Specifically, (1) a dual-path UNet is introduced to adaptively decouple the embeddings, with multiple constraints applied to ensure effective separation; (2) a distribution sampling method is proposed to bridge the modality gap, ensuring the rationality of the alignment process. Extensive experiments on various benchmarks and model backbones demonstrate the superiority of CDDS, which outperforms state-of-the-art methods by 6.6% to 14.2%.
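The core idea above can be illustrated with a minimal numpy sketch. This is a hypothetical toy version, not the paper's implementation: the linear projections `W_sem`/`W_mod` stand in for the dual-path UNet, the cosine-based penalties stand in for the paper's decoupling constraints, and the Gaussian perturbation stands in for its distribution sampling; all names and weights here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8

def decouple(z, W_sem, W_mod):
    """Split an embedding z into (semantic, modality) components."""
    return z @ W_sem, z @ W_mod

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy image/text embeddings of a matched pair (assumed inputs).
z_img = rng.normal(size=dim)
z_txt = rng.normal(size=dim)

# Illustrative stand-ins for the dual-path decoupling network.
W_sem = rng.normal(size=(dim, dim)) / np.sqrt(dim)
W_mod = rng.normal(size=(dim, dim)) / np.sqrt(dim)

s_img, m_img = decouple(z_img, W_sem, W_mod)
s_txt, m_txt = decouple(z_txt, W_sem, W_mod)

# Align only the semantic parts; an orthogonality-style penalty
# (assumed form of the paper's constraints) discourages the semantic
# and modality components from carrying the same information.
align_loss = 1.0 - cosine(s_img, s_txt)
orth_penalty = abs(cosine(s_img, m_img)) + abs(cosine(s_txt, m_txt))

# Distribution sampling (assumed form): treat the text semantics as a
# distribution and draw a nearby sample to stochastically bridge the
# modality gap instead of aligning to a single point.
s_txt_sample = s_txt + 0.05 * rng.normal(size=dim)
sampled_align = 1.0 - cosine(s_img, s_txt_sample)

total = sampled_align + 0.1 * orth_penalty
```

In the actual method the decoupling is learned end-to-end and the sampling is drawn from a fitted distribution; the sketch only shows which terms the loss would touch.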
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Text-to-Image Retrieval | Flickr30k (test) | R@1: 85.3 | 445 |
| Image-to-Text Retrieval | Flickr30k (test) | R@1: 95.2 | 392 |
| Image-to-Text Retrieval | MS-COCO 5K (test) | R@1: 73.3 | 320 |
| Text-to-Image Retrieval | MS-COCO 5K (test) | R@1: 57.6 | 244 |
| Image-Text Retrieval | Flickr30k (test) | R@1 (Img→Txt): 86.8 | 45 |
| Image-Text Retrieval | MS-COCO 5K (test) | RSum (composite score): 472.1 | 28 |
| Image-Text Retrieval | MS-COCO 1K (test) | R@1 (Img→Txt): 84.9 | 24 |