Maximal Matching Matters: Preventing Representation Collapse for Robust Cross-Modal Retrieval
About
Cross-modal image-text retrieval is challenging because of the diverse possible associations between content from different modalities. Traditional methods learn a single-vector embedding to represent the semantics of each sample, but struggle to capture the nuanced and diverse relationships that can exist across modalities. Set-based approaches, which represent each sample with multiple embeddings, offer a promising alternative, as they can capture richer and more diverse relationships. In this paper, we show that, despite their promise, these set-based representations continue to face issues, including sparse supervision and set collapse, which limit their effectiveness. To address these challenges, we propose Maximal Pair Assignment Similarity, which optimizes one-to-one matching between embedding sets while preserving semantic diversity within each set. We also introduce two loss functions to further enhance the representations: a Global Discriminative Loss to enhance distinction among embeddings, and an Intra-Set Divergence Loss to prevent collapse within each set. Our method achieves state-of-the-art performance on MS-COCO and Flickr30k without relying on external data.
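The core idea of scoring two embedding sets by their best one-to-one matching can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes cosine similarity between embeddings and uses the Hungarian algorithm (`scipy.optimize.linear_sum_assignment`) to find the maximal assignment; the function name is hypothetical.

```python
# Sketch of set-to-set similarity via maximal one-to-one assignment.
# Assumes cosine similarity and equal-size sets; names are illustrative only.
import numpy as np
from scipy.optimize import linear_sum_assignment


def maximal_pair_assignment_similarity(img_set: np.ndarray, txt_set: np.ndarray) -> float:
    """Score two embedding sets (k, d) by their best one-to-one matching.

    Returns the mean cosine similarity over the optimal assignment, so every
    embedding in the set contributes, encouraging diversity within the set.
    """
    # L2-normalize rows so dot products become cosine similarities
    a = img_set / np.linalg.norm(img_set, axis=1, keepdims=True)
    b = txt_set / np.linalg.norm(txt_set, axis=1, keepdims=True)
    sim = a @ b.T  # (k, k) pairwise cosine-similarity matrix

    # Hungarian algorithm: maximize total similarity under a one-to-one constraint
    rows, cols = linear_sum_assignment(sim, maximize=True)
    return float(sim[rows, cols].mean())
```

Because the score averages over a full one-to-one matching rather than taking a single max, no embedding in the set can dominate the score, which is one way to discourage the set-collapse failure mode described above.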
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Image-to-Text Retrieval | Flickr30K 1K (test) | R@1 | 84.2 | 491 |
| Text-to-Image Retrieval | Flickr30k (test) | Recall@1 | 64.8 | 445 |
| Text-to-Image Retrieval | Flickr30K 1K (test) | R@1 | 63.2 | 432 |
| Image-to-Text Retrieval | Flickr30k (test) | R@1 | 86.2 | 392 |
| Text-to-Image Retrieval | MSCOCO 5K (test) | R@1 | 44.2 | 308 |
| Text-to-Image Retrieval | MSCOCO (1K test) | R@1 | 66.4 | 118 |
| Image-to-Text Retrieval | MSCOCO (1K test) | R@1 | 83 | 96 |
| Image-to-Text Retrieval | MSCOCO 5K (test) | R@1 | 63.3 | 64 |
| Image-Text Retrieval | MSCOCO (5K) | -- | -- | 24 |