MIRe: Enhancing Multimodal Queries Representation via Fusion-Free Modality Interaction for Multimodal Retrieval
About
Recent multimodal retrieval methods have endowed text-based retrievers with multimodal capabilities by utilizing pre-training strategies for visual-text alignment. They often directly fuse the two modalities for cross-reference during alignment to understand multimodal queries. However, existing methods often overlook crucial visual information due to a text-dominant issue: they overly depend on text-driven signals. In this paper, we introduce MIRe, a retrieval framework that achieves modality interaction without fusing textual features during alignment. Our method allows the textual query to attend to visual embeddings while not feeding text-driven signals back into the visual representations. Additionally, we construct a pre-training dataset for multimodal query retrieval by transforming concise question-answer pairs into extended passages. Our experiments demonstrate that our pre-training strategy significantly enhances the understanding of multimodal queries, resulting in strong performance across four multimodal retrieval benchmarks under zero-shot settings. Moreover, our ablation studies and analyses explicitly verify the effectiveness of our framework in mitigating the text-dominant issue. Our code is publicly available: https://github.com/yeongjoonJu/MIRe
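The one-directional interaction described above (text attends to visual embeddings, but no text-driven signal flows back into the visual side) can be sketched as follows. This is a minimal illustrative NumPy sketch, not the actual MIRe implementation; the function name and the residual-update choice are assumptions for exposition only.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def one_way_cross_attention(text_emb, vis_emb):
    """Illustrative sketch (not the official MIRe code):
    text tokens attend to visual embeddings, while the visual
    embeddings are treated as read-only and returned unchanged."""
    d = text_emb.shape[-1]
    # (T, V) attention logits: each text token scores every visual token.
    scores = text_emb @ vis_emb.T / np.sqrt(d)
    attn = softmax(scores, axis=-1)
    # Text representations are enriched with visual context...
    attended_text = text_emb + attn @ vis_emb
    # ...but no text-driven signal is written into the visual side.
    return attended_text, vis_emb

# Usage: 4 text tokens and 9 visual patches, dimension 32.
rng = np.random.default_rng(0)
text = rng.standard_normal((4, 32))
vis = rng.standard_normal((9, 32))
new_text, new_vis = one_way_cross_attention(text, vis)
```

The key point the sketch makes concrete is that `vis_emb` passes through untouched, which is how the framework avoids the text-dominant feedback loop during alignment.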
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Knowledge-based Visual Retrieval | OKVQA Google Search (test) | PR@5 | 84.66 | 16 |
| Multi-modal knowledge base retrieval | ReMuQ (test) | R@5 | 94.4 | 14 |
| Knowledge-based Visual Retrieval | ReMuQ 1.0 (test) | MRR@5 | 83.06 | 8 |
| Knowledge-based Visual Retrieval | OKVQA WK11M (test) | MRR@5 | 51.15 | 6 |
| Knowledge-based Visual Retrieval | E-VQA 1.0 (test) | MRR@5 | 0.4492 | 6 |