GME: Improving Universal Multimodal Retrieval by Multimodal LLMs

About

Universal Multimodal Retrieval (UMR) aims to enable search across various modalities using a unified model, where queries and candidates can consist of pure text, images, or a combination of both. Previous work has attempted to adopt multimodal large language models (MLLMs) to realize UMR using only text data. However, our preliminary experiments demonstrate that more diverse multimodal training data can further unlock the potential of MLLMs. Despite its effectiveness, the existing multimodal training data is highly imbalanced in terms of modality, which motivates us to develop a training data synthesis pipeline and construct a large-scale, high-quality fused-modal training dataset. Based on the synthetic training data, we develop the General Multimodal Embedder (GME), an MLLM-based dense retriever designed for UMR. Furthermore, we construct a comprehensive UMR Benchmark (UMRB) to evaluate the effectiveness of our approach. Experimental results show that our method achieves state-of-the-art performance among existing UMR methods. Finally, we provide in-depth analyses of model scaling and training strategies, and perform ablation studies on both the model and the synthetic data.
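The core mechanic described above is dense retrieval in a shared embedding space: one model maps text, images, or fused text+image inputs to vectors, and candidates are ranked by similarity to the query vector. The sketch below illustrates only that retrieval step; the `embed` function is a hypothetical stand-in (here a deterministic pseudo-embedding, not the actual GME model), and the function names are our own.

```python
import numpy as np

def embed(item: str) -> np.ndarray:
    # Hypothetical stand-in for an MLLM embedder. In a UMR system a single
    # model would map text, images, or fused inputs into one vector space;
    # here we derive a deterministic pseudo-embedding so the retrieval
    # mechanics below are runnable on their own.
    rng = np.random.default_rng(abs(hash(item)) % (2**32))
    v = rng.standard_normal(64)
    return v / np.linalg.norm(v)  # unit-normalize for cosine similarity

def retrieve(query: str, candidates: list[str], top_k: int = 3):
    # Dense retrieval: embed everything, rank candidates by cosine similarity.
    q = embed(query)
    cand_vecs = np.stack([embed(c) for c in candidates])
    scores = cand_vecs @ q  # unit vectors, so dot product == cosine
    order = np.argsort(-scores)[:top_k]
    return [(candidates[i], float(scores[i])) for i in order]

# Candidates may be any modality in a real UMR setup; strings stand in here.
corpus = ["a photo of a cat", "image: cat.jpg", "text+image: red dress + dress.png"]
results = retrieve("cat", corpus)
```

The same ranking loop works regardless of candidate modality, which is the point of a unified embedder: only `embed` knows about modalities, the retrieval code does not.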

Xin Zhang, Yanzhao Zhang, Wen Xie, Mingxin Li, Ziqi Dai, Dingkun Long, Pengjun Xie, Meishan Zhang, Wenjie Li, Min Zhang • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Text-to-Image Retrieval | Flickr30K | R@1 | 81.1 | 531 |
| Image-to-Text Retrieval | Flickr30K | R@1 | 91.1 | 429 |
| Text-to-Image Retrieval | COCO | Recall@1 | 55.6 | 156 |
| Image-to-Text Retrieval | COCO | R@1 | 67.8 | 149 |
| Text-to-Image Retrieval | Flickr30K-CN | R@1 | 79.1 | 99 |
| Image-to-Text Retrieval | Flickr30K-CN | R@1 | 91.8 | 99 |
| Video-to-Text Retrieval | VATEX | -- | -- | 80 |
| Text-to-Image Retrieval | DCI | R@1 | 64.6 | 79 |
| Image-to-Text Retrieval | DCI | R@1 | 59.8 | 79 |
| Image Retrieval | Fashion200k (test) | Recall@1 | 10.31 | 58 |

Showing 10 of 104 rows.
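The R@1 / Recall@1 figures in the table measure whether a relevant item appears in the top-k retrieved results, averaged over queries. A minimal implementation of that metric (function name and toy data are our own, for illustration only):

```python
def recall_at_k(ranked_ids, relevant_ids, k=1):
    # Fraction of queries whose top-k retrieved list contains at least one
    # relevant item. `ranked_ids[i]` is the ranked candidate list for query i;
    # `relevant_ids[i]` is the set of ground-truth matches for query i.
    hits = sum(
        1 for ranked, relevant in zip(ranked_ids, relevant_ids)
        if set(ranked[:k]) & set(relevant)
    )
    return hits / len(ranked_ids)

# Three toy queries: two have a relevant item at rank 1, one only at rank 2.
ranked = [["a", "b"], ["c", "d"], ["e", "f"]]
relevant = [["a"], ["d"], ["e"]]
r1 = recall_at_k(ranked, relevant, k=1)  # 2 of 3 queries hit at rank 1
r2 = recall_at_k(ranked, relevant, k=2)  # all 3 hit within the top 2
```

Recall@1 is the strictest setting: the single top-ranked candidate must already be a correct match, which is why scores on hard datasets like Fashion200k are much lower than on Flickr30K.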
