
U-MARVEL: Unveiling Key Factors for Universal Multimodal Retrieval via Embedding Learning with MLLMs

About

Universal multimodal retrieval (UMR), which aims to address complex retrieval tasks where both queries and candidates span diverse modalities, has been significantly advanced by the emergence of multimodal large language models (MLLMs). While state-of-the-art MLLM-based methods in the literature predominantly adopt contrastive learning principles, they often differ in their specific training recipes. Despite their success, the mechanisms underlying their retrieval capabilities remain largely unexplored, potentially resulting in suboptimal performance and limited generalization. To address these issues, we present a comprehensive study aimed at uncovering the key factors that drive effective embedding learning for UMR with MLLMs. We begin by implementing a general MLLM-based embedding learning pipeline and systematically analyze the primary contributors to high-performing universal retrieval systems. Building on this, we examine key design choices in embedding generation and training strategies, including progressive transition, hard negative mining, and re-ranker distillation. Notably, our findings reveal that often-overlooked factors can have a substantial impact on model performance. Based on these discoveries, we introduce a unified framework termed U-MARVEL (Universal MultimodAl RetrieVal via Embedding Learning), which outperforms state-of-the-art competitors on the M-BEIR benchmark by a large margin in supervised settings, and also exhibits strong zero-shot performance on several tasks such as composed image retrieval and text-to-video retrieval. These results underscore the generalization potential of our framework across various embedding-based retrieval tasks. Code is available at https://github.com/chaxjli/U-MARVEL
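The abstract notes that MLLM-based UMR methods share a contrastive-learning backbone. As a rough illustration only, an in-batch InfoNCE objective over normalized query and candidate embeddings might look like the sketch below; the function name, tensor shapes, and temperature value are illustrative assumptions, not the paper's actual training recipe.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(query_emb: torch.Tensor,
                  cand_emb: torch.Tensor,
                  temperature: float = 0.05) -> torch.Tensor:
    """In-batch contrastive (InfoNCE) loss.

    Each query's positive is the candidate at the same batch index;
    all other candidates in the batch act as negatives.
    query_emb, cand_emb: (B, D) embeddings from the MLLM encoder.
    """
    q = F.normalize(query_emb, dim=-1)
    c = F.normalize(cand_emb, dim=-1)
    # (B, B) matrix of temperature-scaled cosine similarities.
    logits = q @ c.t() / temperature
    # Positive pairs lie on the diagonal.
    labels = torch.arange(q.size(0), device=q.device)
    return F.cross_entropy(logits, labels)
```

Refinements studied in the paper, such as hard negative mining and re-ranker distillation, would modify how the negatives and training targets in this objective are constructed.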

Xiaojie Li, Chu Li, Shi-Zhe Chen, Xi Chen• 2025

Related benchmarks

Task                        Dataset                          Metric                   Result   Rank
Text-to-Video Retrieval     MSR-VTT                          Recall@1                 47.5     313
Text-to-Video Retrieval     MSVD                             Recall@1                 54.6     218
Composed Image Retrieval    CIRCO                            mAP@5                    46       63
Multimodal Retrieval        M-BEIR (test)                    Average Recall           64.8     36
Text-to-Image Retrieval     Flickr                           Recall@1                 98.9     35
Image-to-Text Retrieval     Flickr                           Recall@1                 97.7     25
Image-Text Matching         Sugar-Crepe                      Accuracy                 93.4     19
Image-Text Matching         CC-Neg                           Accuracy                 86.1     17
Multimodal Retrieval        MT-FIQ                           Recall@5                 66.3     15
Multimodal Retrieval        M-BEIR Global Pool 1.0 (test)    VisualNews R@5 (qt->ci)  48.8     11

Showing 10 of 15 rows
