MIRe: Enhancing Multimodal Queries Representation via Fusion-Free Modality Interaction for Multimodal Retrieval

About

Recent multimodal retrieval methods endow text-based retrievers with multimodal capabilities through pre-training strategies for visual-text alignment, typically fusing the two modalities directly so they can cross-reference each other during alignment. However, such fusion often overlooks crucial visual information due to a text-dominant issue: the fused representation comes to depend overly on text-driven signals. In this paper, we introduce MIRe, a retrieval framework that achieves modality interaction without fusing textual features into the visual representations during alignment. Our method allows the textual query to attend to visual embeddings while never feeding text-driven signals back into the visual representations. Additionally, we construct a pre-training dataset for multimodal query retrieval by transforming concise question-answer pairs into extended passages. Our experiments demonstrate that this pre-training strategy significantly enhances the understanding of multimodal queries, yielding strong zero-shot performance across four multimodal retrieval benchmarks. Moreover, our ablation studies and analyses explicitly verify the effectiveness of our framework in mitigating the text-dominant issue. Our code is publicly available: https://github.com/yeongjoonJu/MIRe
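The core idea of the abstract, one-way interaction where text queries attend to visual embeddings but nothing is written back into the visual stream, can be sketched with a single cross-attention block. This is a minimal illustration only: the class name FusionFreeInteraction, the dimensions, and the residual placement are assumptions for exposition, not the repository's actual code.

```python
import torch
import torch.nn as nn

class FusionFreeInteraction(nn.Module):
    """One-way cross-attention (hypothetical sketch): text tokens query the
    visual tokens, but no text-driven signal updates the visual stream."""

    def __init__(self, dim: int = 768, num_heads: int = 12):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, text_emb: torch.Tensor, visual_emb: torch.Tensor) -> torch.Tensor:
        # Query = text tokens; Key/Value = visual tokens. The visual tensor is
        # read from but never modified, so the visual representation is never
        # conditioned on the text (no fusion in the visual direction).
        attended, _ = self.cross_attn(query=text_emb, key=visual_emb, value=visual_emb)
        return self.norm(text_emb + attended)  # residual on the text side only

# Toy usage: a 16-token text query attending over 49 visual patch embeddings.
interaction = FusionFreeInteraction()
text = torch.randn(2, 16, 768)     # (batch, text tokens, dim)
visual = torch.randn(2, 49, 768)   # (batch, visual patches, dim)
multimodal_query = interaction(text, visual)  # shape: (2, 16, 768)
```

The design point the sketch tries to capture is directionality: a symmetric fusion block would also let visual tokens attend to text, which is exactly the pathway the paper argues lets text-driven signals dominate.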

Yeong-Joon Ju, Ho-Joong Kim, Seong-Whan Lee • 2024

Related benchmarks

Task | Dataset | Result | Rank
Knowledge-based Visual Retrieval | OKVQA Google Search (test) | PR@5: 84.66 | 16
Multi-modal knowledge base retrieval | ReMuQ (test) | R@5: 94.4 | 14
Knowledge-based Visual Retrieval | ReMuQ 1.0 (test) | MRR@5: 83.06 | 8
Knowledge-based Visual Retrieval | OKVQA WK11M (test) | MRR@5: 51.15 | 6
Knowledge-based Visual Retrieval | E-VQA 1.0 (test) | MRR@5: 0.4492 | 6
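For context on the metrics above: R@k counts a query as a hit if any gold passage appears in the top k retrieved results; MRR@k averages the reciprocal rank of the first gold passage, with zero for queries that miss entirely; PR@k, as commonly used for OK-VQA retrieval, is the same hit rate but with pseudo-relevance, a passage counting as relevant if it contains an answer string. A minimal sketch of the first two (illustrative, not the paper's evaluation script):

```python
def recall_at_k(ranked_ids, gold_ids, k=5):
    """R@k: fraction of queries with at least one gold passage in the top k."""
    hits = sum(any(doc in gold for doc in ranked[:k])
               for ranked, gold in zip(ranked_ids, gold_ids))
    return hits / len(ranked_ids)

def mrr_at_k(ranked_ids, gold_ids, k=5):
    """MRR@k: mean reciprocal rank of the first gold passage within the top k."""
    total = 0.0
    for ranked, gold in zip(ranked_ids, gold_ids):
        for rank, doc in enumerate(ranked[:k], start=1):
            if doc in gold:
                total += 1.0 / rank
                break
    return total / len(ranked_ids)

# Toy data: two queries, top-5 ranked passage ids vs. gold passage sets.
ranked = [["p3", "p7", "p1", "p9", "p2"], ["p5", "p4", "p8", "p6", "p0"]]
gold = [{"p1"}, {"p9"}]
print(recall_at_k(ranked, gold))  # 0.5   (only the first query hits)
print(mrr_at_k(ranked, gold))     # (1/3 + 0) / 2 ≈ 0.167
```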

Other info

Code: https://github.com/yeongjoonJu/MIRe
