
GME: Improving Universal Multimodal Retrieval by Multimodal LLMs

About

Universal Multimodal Retrieval (UMR) aims to enable search across various modalities using a unified model, where queries and candidates can consist of pure text, images, or a combination of both. Previous work has attempted to adopt multimodal large language models (MLLMs) to realize UMR using only text data. However, our preliminary experiments demonstrate that more diverse multimodal training data can further unlock the potential of MLLMs. Despite its effectiveness, the existing multimodal training data is highly imbalanced in terms of modality, which motivates us to develop a training data synthesis pipeline and construct a large-scale, high-quality fused-modal training dataset. Based on the synthetic training data, we develop the General Multimodal Embedder (GME), an MLLM-based dense retriever designed for UMR. Furthermore, we construct a comprehensive UMR Benchmark (UMRB) to evaluate the effectiveness of our approach. Experimental results show that our method achieves state-of-the-art performance among existing UMR methods. Last, we provide in-depth analyses of model scaling and training strategies, and perform ablation studies on both the model and synthetic data.
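In a dense retriever such as GME, every query and candidate (text, image, or a text-image combination) is mapped to a single embedding vector, and retrieval reduces to nearest-neighbor search in that shared space. The following is a minimal sketch of the scoring step only, using toy vectors in place of MLLM embeddings; the function names are illustrative and are not GME's actual API:

```python
import math

def cosine_similarity(a, b):
    # Score a query embedding against one candidate embedding.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def retrieve(query_emb, candidate_embs, top_k=2):
    # Rank candidate indices by similarity to the query, best first.
    ranked = sorted(
        range(len(candidate_embs)),
        key=lambda i: cosine_similarity(query_emb, candidate_embs[i]),
        reverse=True,
    )
    return ranked[:top_k]

# Toy example: the third candidate is identical to the query,
# so it should be ranked first.
print(retrieve([1.0, 0.0], [[0.9, 0.1], [0.0, 1.0], [1.0, 0.0]]))
```

Because queries and candidates of all modalities live in one embedding space, the same ranking routine serves text-to-image, image-to-text, and fused-modal search alike.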

Xin Zhang, Yanzhao Zhang, Wen Xie, Mingxin Li, Ziqi Dai, Dingkun Long, Pengjun Xie, Meishan Zhang, Wenjie Li, Min Zhang • 2024

Related benchmarks

Task                       Dataset        Metric                  Result   Rank
Text-to-Image Retrieval    Flickr30K      R@1                     81.1     460
Image-to-Text Retrieval    Flickr30K      R@1                     91.1     379
Text-to-Image Retrieval    COCO           R@1                     55.6     130
Image-to-Text Retrieval    COCO           R@1                     67.8     123
Text-to-Image Retrieval    Flickr30K-CN   R@1                     79.1     99
Image-to-Text Retrieval    Flickr30K-CN   R@1                     91.8     99
Text-to-Image Retrieval    DCI            R@1                     64.6     68
Image-to-Text Retrieval    DCI            R@1                     59.8     68
Video-to-Text Retrieval    VATEX          --                      --       68
Multimodal Retrieval       MMEB           Classification Score    56.7     50

(Showing 10 of 62 rows.)
