
CREM: Compression-Driven Representation Enhancement for Multimodal Retrieval and Comprehension

About

Multimodal Large Language Models (MLLMs) have shown remarkable success in comprehension tasks such as visual description and visual question answering. However, their direct application to embedding-based tasks like retrieval remains challenging due to the discrepancy between output formats and optimization objectives. Previous approaches often employ contrastive fine-tuning to adapt MLLMs for retrieval, but at the cost of losing their generative capabilities. We argue that both generative and embedding tasks fundamentally rely on shared cognitive mechanisms, specifically cross-modal representation alignment and contextual comprehension. To this end, we propose CREM (Compression-driven Representation Enhanced Model), a unified framework that enhances multimodal representations for retrieval while preserving generative ability. Specifically, we introduce a compression-based prompt design with learnable chorus tokens that aggregate multimodal semantics, and a compression-driven training strategy that integrates contrastive and generative objectives through compression-aware attention. Extensive experiments demonstrate that CREM achieves state-of-the-art retrieval performance on MMEB while maintaining strong generative performance on multiple comprehension benchmarks. Our findings highlight that generative supervision can further improve the representational quality of MLLMs under the proposed compression-driven paradigm.
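The paper's training code is not reproduced here, but the core idea of combining a contrastive (embedding) objective with a generative one can be sketched. The snippet below is a minimal, hypothetical illustration: `info_nce` is a standard in-batch-negatives contrastive loss over query/document embeddings, and `combined_loss` adds a generative negative log-likelihood term with an assumed weighting factor `lam`; the actual CREM objective and its compression-aware attention are not specified in this summary.

```python
import numpy as np

def info_nce(query, doc, temperature=0.07):
    """Symmetric-style InfoNCE with in-batch negatives (one direction shown)."""
    # L2-normalize embeddings, then compute cosine-similarity logits
    q = query / np.linalg.norm(query, axis=1, keepdims=True)
    d = doc / np.linalg.norm(doc, axis=1, keepdims=True)
    logits = q @ d.T / temperature
    # row i's positive is column i; all other columns are in-batch negatives
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    idx = np.arange(len(q))
    return -log_probs[idx, idx].mean()

def combined_loss(q_emb, d_emb, gen_nll, lam=1.0):
    """Joint objective: contrastive alignment plus generative supervision.

    gen_nll is the generative (next-token) negative log-likelihood computed
    elsewhere; lam is an assumed trade-off weight, not a value from the paper.
    """
    return info_nce(q_emb, d_emb) + lam * gen_nll
```

With matched query/document pairs, the contrastive term is low; mismatching the pairs (e.g. shuffling the documents within a batch) drives it up, which is the signal that aligns the two modalities' embeddings.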

Lihao Liu, Yan Wang, Biao Yang, Da Li, Jiangxia Cao, Yuxiao Luo, Xiang Chen, Xiangyu Wu, Wei Yuan, Fan Yang, Guiguang Ding, Tingting Gao, Guorui Zhou • 2026

Related benchmarks

Task | Dataset | Metric | Result | Rank
--- | --- | --- | --- | ---
Multimodal Understanding | MMStar | - | - | 197
Multimodal Understanding | MMMU | MMMU Score | 52.1 | 78
Multimodal Retrieval | MMEB | Classification Score | 68.3 | 50
Multimodal Understanding | MMB | Score | 80.5 | 30
Caption Retrieval | Flickr30K | R@1 | 95.5 | 23
Multimodal Understanding | AI2D | Score | 81.9 | 17
Compositional Image-Text Matching | SugarCrepe | Replacement Score | 88.7 | 9
Multimodal Comprehension | MMVet | Score | 56.7 | 8
Multimodal Comprehension | Hallusion | Score | 48.8 | 8
Long-caption Image-to-Text Retrieval | ShareGPT4V | Recall@1 | 90.8 | 4

(Showing 10 of 16 rows)
