Breaking the Modality Barrier: Universal Embedding Learning with Multimodal LLMs

About

The Contrastive Language-Image Pre-training (CLIP) framework has become a widely used approach for multimodal representation learning, particularly in image-text retrieval and clustering. However, its efficacy is constrained by three key limitations: (1) text token truncation, (2) isolated image-text encoding, and (3) deficient compositionality due to bag-of-words behavior. While recent Multimodal Large Language Models (MLLMs) have demonstrated significant advances in generalized vision-language understanding, their potential for learning transferable multimodal representations remains underexplored. In this work, we present UniME (Universal Multimodal Embedding), a novel two-stage framework that leverages MLLMs to learn discriminative representations for diverse downstream tasks. In the first stage, we perform textual discriminative knowledge distillation from a powerful LLM-based teacher model to enhance the embedding capability of the MLLM's language component. In the second stage, we introduce hard negative enhanced instruction tuning to further advance discriminative representation learning. Specifically, we first mitigate false negative contamination and then sample multiple hard negatives per instance within each batch, forcing the model to focus on challenging samples. This approach not only improves discriminative power but also enhances instruction-following ability in downstream tasks. We conduct extensive experiments on the MMEB benchmark and multiple retrieval tasks, including short- and long-caption retrieval and compositional retrieval. Results demonstrate that UniME achieves consistent performance improvements across all tasks, exhibiting superior discriminative and compositional capabilities.
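The second-stage objective described above (filter false negatives, then contrast against the hardest in-batch negatives) lends itself to a short sketch. Below is a minimal PyTorch illustration, not the paper's exact formulation: the function name hard_negative_infonce, the margin-based false-negative test, and the hyperparameters k, tau, and fn_margin are all illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def hard_negative_infonce(q: torch.Tensor, c: torch.Tensor,
                          k: int = 8, tau: float = 0.05,
                          fn_margin: float = 0.1) -> torch.Tensor:
    """Contrastive loss over the positive plus the k hardest in-batch negatives.

    q: (B, D) query embeddings; c: (B, D) candidate embeddings, where
    c[i] is the positive for q[i]. Hyperparameters are illustrative.
    """
    q = F.normalize(q, dim=-1)
    c = F.normalize(c, dim=-1)
    sim = q @ c.T                       # (B, B) cosine similarity matrix
    B = sim.size(0)
    pos = sim.diagonal()                # similarity of each query to its positive

    # Drop the positive itself and suspected false negatives: an off-diagonal
    # candidate scoring within fn_margin of the positive is likely an
    # unlabeled match, so it is excluded from the negative pool.
    diag = torch.eye(B, dtype=torch.bool, device=sim.device)
    suspect = sim > (pos.unsqueeze(1) - fn_margin)
    neg = sim.masked_fill(diag | suspect, float("-inf"))

    # Keep only the k hardest (most similar) remaining negatives per query.
    hard, _ = neg.topk(min(k, B - 1), dim=1)

    # InfoNCE with the positive at index 0; -inf logits vanish in the softmax.
    logits = torch.cat([pos.unsqueeze(1), hard], dim=1) / tau
    labels = torch.zeros(B, dtype=torch.long, device=sim.device)
    return F.cross_entropy(logits, labels)

# Example usage with random embeddings (batch of 32, dimension 768):
# loss = hard_negative_infonce(torch.randn(32, 768), torch.randn(32, 768))
```

Filtering before the top-k selection matters: without it, the "hardest" negatives in a batch are often unlabeled matches, and contrasting against them would push genuinely matching pairs apart.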

Tiancheng Gu, Kaicheng Yang, Ziyong Feng, Xingjun Wang, Yanzhao Zhang, Dingkun Long, Yingda Chen, Weidong Cai, Jiankang Deng • 2025

Related benchmarks

Task                           Dataset          Metric                Result  Rank
Multimodal Retrieval           MMEB             Classification Score  66.8    50
Text-to-Video Retrieval        MSRVTT           Recall@1              35.1    48
Text-to-Image Retrieval        Flickr           R@1                   93.9    35
Spatial-Conditioned Reasoning  SCaR             RefCOCO+ Score        57.8    27
Image-to-Text Retrieval        Flickr           R@1                   95.9    25
Caption Retrieval              Flickr30K        R@1                   94.4    23
Image Embedding                MMEB v1 (test)   Classification        66.8    23
Multimodal Ranking             MMEB             Classification Score  66.8    22
Image-Text Matching            Sugar-Crepe      Accuracy              85      19
Clip-to-Video Retrieval        LoVR C2V         R@1                   53.7    18

(Showing 10 of 42 rows.)
