MM-Embed: Universal Multimodal Retrieval with Multimodal LLMs

About

State-of-the-art retrieval models typically address a straightforward search scenario, in which retrieval tasks are fixed (e.g., finding a passage to answer a specific question) and only a single modality is supported for both queries and retrieved results. This paper introduces techniques for advancing information retrieval with multimodal large language models (MLLMs), enabling a broader search scenario, termed universal multimodal retrieval, in which multiple modalities and diverse retrieval tasks are accommodated. To this end, we first study fine-tuning an MLLM as a bi-encoder retriever on 10 datasets spanning 16 retrieval tasks. Our empirical results show that the fine-tuned MLLM retriever can understand challenging queries composed of both text and image, but it underperforms a smaller CLIP retriever on cross-modal retrieval tasks due to the modality bias exhibited by MLLMs. To address this issue, we propose modality-aware hard negative mining, which mitigates that bias. Second, we propose continually fine-tuning the universal multimodal retriever to enhance its text retrieval capability while preserving its multimodal retrieval capability. As a result, our model, MM-Embed, achieves state-of-the-art performance on the multimodal retrieval benchmark M-BEIR, which spans multiple domains and tasks, while also surpassing the state-of-the-art text retrieval model, NV-Embed-v1, on the MTEB retrieval benchmark. Finally, we explore prompting off-the-shelf MLLMs as zero-shot rerankers to refine the ranking of candidates returned by the multimodal retriever. We find that, through prompting and reranking, MLLMs can further improve multimodal retrieval when user queries (e.g., text-image composed queries) are more complex and challenging to understand. These findings pave the way for advancing universal multimodal retrieval in the future.
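To make the bi-encoder setup and the modality-aware hard negative mining concrete, here is a minimal sketch. It assumes a stub `embed` function standing in for the MLLM encoder, toy `dict` items with `text`/`image`/`modality` fields, and a mining rule that keeps only high-scoring negatives in the task's target modality; all names and the exact filtering rule are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of bi-encoder retrieval with modality-aware hard
# negative mining. `embed` is a stand-in for the MLLM encoder; a real
# system would run the fine-tuned MLLM here (illustrative assumption).
import numpy as np

def embed(item):
    # Map a (text, image) item to a unit-normalized vector. The hash-seeded
    # random vector is a placeholder for the MLLM's embedding.
    seed = abs(hash((item["text"], item["image"]))) % (2**32)
    v = np.random.default_rng(seed).normal(size=128)
    return v / np.linalg.norm(v)

def mine_hard_negatives(query, positives, candidates, target_modality, k=4):
    # Score every candidate against the query by cosine similarity, but keep
    # only negatives whose modality matches the task's target modality, so
    # training does not reinforce the MLLM's bias toward one modality.
    qv = embed(query)
    pool = [c for c in candidates
            if c["modality"] == target_modality and c not in positives]
    pool.sort(key=lambda c: float(embed(c) @ qv), reverse=True)
    return pool[:k]

# Toy usage: a text+image composed query on an image-retrieval task.
query = {"text": "the same dog, but running on a beach", "image": "q.jpg"}
candidates = (
    [{"text": "", "image": f"img_{i}.jpg", "modality": "image"} for i in range(20)]
    + [{"text": f"caption {i}", "image": "", "modality": "text"} for i in range(20)]
)
hard_negatives = mine_hard_negatives(query, [], candidates, "image")
```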
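The zero-shot reranking step can be sketched in the same spirit. The sketch below assumes a hypothetical `mllm_yes_prob` call that would return the model's probability of answering "Yes" to a relevance question; the prompt wording and the keyword-based scoring stub are assumptions for illustration, not the paper's exact prompt or API.

```python
# Minimal sketch of prompting an off-the-shelf MLLM as a zero-shot reranker.

def mllm_yes_prob(prompt, images):
    # Stand-in for an MLLM call that would return P("Yes") for the relevance
    # question; a keyword heuristic keeps the sketch runnable (assumption).
    return sum(word in prompt.lower() for word in ("beach", "running")) / 2.0

def rerank(query_text, query_image, candidates, top_k=10):
    # Rerank only the retriever's top-k candidates: prompt the MLLM with the
    # composed query and one candidate at a time, then sort by P("Yes").
    scored = []
    for cand in candidates[:top_k]:
        prompt = (
            f"Query text: {query_text}\nQuery image: <image_1>\n"
            f"Candidate: {cand['text']} <image_2>\n"
            "Does the candidate satisfy the query? Answer Yes or No."
        )
        p_yes = mllm_yes_prob(prompt, [query_image, cand["image"]])
        scored.append((p_yes, cand))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [cand for _, cand in scored]

# Toy usage on two candidates.
candidates = [
    {"text": "a dog running on a beach", "image": "a.jpg"},
    {"text": "a cat on a sofa", "image": "b.jpg"},
]
reranked = rerank("the same dog, but running on a beach", "q.jpg", candidates)
```

Reranking only a shallow top-k list keeps the per-query MLLM cost bounded while still letting the stronger model correct mistakes on hard composed queries.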

Sheng-Chieh Lin, Chankyu Lee, Mohammad Shoeybi, Jimmy Lin, Bryan Catanzaro, Wei Ping • 2024

Related benchmarks

Task | Dataset | Metric | Result | Rank
Composed Image Retrieval | CIRCO (test) | -- | -- | 234
Composed Image Retrieval | CIRCO | mAP@5 | 35.5 | 63
Composed Image Retrieval | CIRCO 1.0 (test) | mAP@5 | 32.3 | 36
Multi-modal Retrieval | M-BEIR (test) | Average Recall | 52.7 | 36
Handwriting Retrieval | Handwriting In-Domain Set | Accuracy@1 | 53.05 | 30
Handwriting Retrieval | Handwriting Spanish synthetic disjoint fonts (Out-of-Domain, OOD) | Top-1 Accuracy | 29.77 | 30
Visual Information Retrieval | MVRB | SR | 25.86 | 16
Multimodal Retrieval | MT-FIQ | Recall@5 | 59 | 15
Multi-modal Retrieval | M-BEIR Global Pool 1.0 (test) | VisualNews R@5 (qt->ci) | 41 | 11
Contextual Image Retrieval | VIST | R@1 | 2.57e+3 | 10
(Showing 10 of 12 rows.)
