MM-Embed: Universal Multimodal Retrieval with Multimodal LLMs
About
State-of-the-art retrieval models typically address a narrow search scenario: the retrieval task is fixed (e.g., finding a passage to answer a given question) and queries and retrieved results share a single modality. This paper introduces techniques for advancing information retrieval with multimodal large language models (MLLMs), enabling a broader search scenario, termed universal multimodal retrieval, in which multiple modalities and diverse retrieval tasks are accommodated.

To this end, we first study fine-tuning an MLLM as a bi-encoder retriever on 10 datasets spanning 16 retrieval tasks. Our empirical results show that the fine-tuned MLLM retriever understands challenging queries composed of both text and image, but it underperforms a smaller CLIP retriever on cross-modal retrieval tasks because of the modality bias exhibited by MLLMs. To address this, we propose modality-aware hard negative mining, which mitigates that bias. Second, we continually fine-tune the universal multimodal retriever to strengthen its text retrieval capability while preserving its multimodal retrieval capability. As a result, our model, MM-Embed, achieves state-of-the-art performance on the multimodal retrieval benchmark M-BEIR, which spans multiple domains and tasks, while also surpassing the state-of-the-art text retrieval model, NV-Embed-v1, on the MTEB retrieval benchmark.

Finally, we explore prompting off-the-shelf MLLMs as zero-shot rerankers to refine the ranking of candidates returned by the multimodal retriever. We find that, through prompting and reranking, MLLMs can further improve multimodal retrieval when user queries (e.g., text-image composed queries) are complex and challenging to understand. These findings pave the way for future work on universal multimodal retrieval.
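The modality bias described above can be countered during contrastive fine-tuning by restricting hard negatives to the modality the task actually targets. The sketch below illustrates that idea under stated assumptions: the helper names (`contrastive_loss`, `modality_aware_negatives`) and the candidate-record fields are hypothetical stand-ins, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(q, pos, negs, tau=0.05):
    """InfoNCE with one positive per query and K mined hard negatives.
    q: (B, D) query embeddings, pos: (B, D) positives, negs: (B, K, D)
    hard negatives; all embeddings assumed L2-normalized."""
    pos_score = (q * pos).sum(-1, keepdim=True)          # (B, 1)
    neg_score = torch.einsum("bd,bkd->bk", q, negs)      # (B, K)
    logits = torch.cat([pos_score, neg_score], dim=-1) / tau
    labels = torch.zeros(q.size(0), dtype=torch.long)    # positive sits at index 0
    return F.cross_entropy(logits, labels)

def modality_aware_negatives(ranked_candidates, target_modality, k=5):
    """Keep only the top-ranked *incorrect* candidates whose modality matches
    the task's intended target modality, so training penalizes the retriever
    for drifting toward its preferred modality. Each candidate is assumed to
    be a dict with hypothetical "modality" and "is_positive" fields."""
    return [c for c in ranked_candidates
            if c["modality"] == target_modality and not c["is_positive"]][:k]
```

In practice, `ranked_candidates` would be mined from the current retriever checkpoint, and the filtered negatives fed into `contrastive_loss` at each training step.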
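The zero-shot reranking stage can likewise be approximated with a simple relevance prompt. Below is a minimal sketch assuming a hypothetical `mllm_score_yes` callable that returns the model's probability of answering "Yes"; the paper's actual prompts and scoring are not reproduced here.

```python
# Hedged sketch of prompting an off-the-shelf MLLM as a zero-shot reranker.
PROMPT = (
    "Query: {query}\n"
    "Candidate: {candidate}\n"
    "Does the candidate satisfy the query? Answer Yes or No."
)

def rerank(query, candidates, mllm_score_yes):
    """Re-order retriever candidates by the MLLM's relevance judgment.
    `mllm_score_yes(prompt)` is a hypothetical interface to the MLLM that
    returns a scalar likelihood for the answer "Yes"."""
    scored = [(c, mllm_score_yes(PROMPT.format(query=query, candidate=c)))
              for c in candidates]
    return [c for c, _ in sorted(scored, key=lambda x: x[1], reverse=True)]
```

Because reranking only touches the retriever's top candidates, this step adds modest inference cost while refining the ranking for the hardest composed queries.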
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Composed Image Retrieval | CIRCO (test) | -- | -- | 234 |
| Composed Image Retrieval | CIRCO | mAP@5 | 35.5 | 63 |
| Composed Image Retrieval | CIRCO 1.0 (test) | mAP@5 | 32.3 | 36 |
| Multi-modal Retrieval | M-BEIR (test) | Average Recall | 52.7 | 36 |
| Handwriting Retrieval | Handwriting In-Domain Set | Accuracy@1 | 53.05 | 30 |
| Handwriting Retrieval | Handwriting Spanish synthetic disjoint fonts (Out-of-Domain) | Top-1 Accuracy | 29.77 | 30 |
| Visual Information Retrieval | MVRB | SR | 25.86 | 16 |
| Multimodal Retrieval | MT-FIQ | Recall@5 | 59 | 15 |
| Multi-modal Retrieval | M-BEIR Global Pool 1.0 (test) | VisualNews R@5 (qt->ci) | 41 | 11 |
| Contextual Image Retrieval | VIST | R@1 | 2.57e+3 | 10 |