
IPCV: Information-Preserving Compression for MLLM Visual Encoders

About

Multimodal Large Language Models (MLLMs) deliver strong vision-language performance but at high computational cost, driven by the large number of visual tokens processed by the Vision Transformer (ViT) encoder. Existing token pruning strategies are inadequate: LLM-stage token pruning overlooks the ViT's overhead, while conventional ViT token pruning, lacking language guidance, risks discarding textually critical visual cues and introduces feature distortions that are amplified by the ViT's bidirectional attention. To meet these challenges, we propose IPCV, a training-free, information-preserving compression framework for MLLM visual encoders. IPCV enables aggressive token pruning inside the ViT via Neighbor-Guided Reconstruction (NGR), which temporarily reconstructs pruned tokens so they can participate in attention with minimal overhead, then fully restores them before passing them to the LLM. In addition, we introduce Attention Stabilization (AS), which further mitigates the negative effects of token pruning by approximating the K/V of pruned tokens; it can be applied directly to prior LLM-side token pruning methods to enhance their performance. Extensive experiments show that IPCV substantially reduces end-to-end computation and outperforms state-of-the-art training-free token compression methods across diverse image and video benchmarks. Our code is available at https://github.com/Perkzi/IPCV.
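The core idea of Neighbor-Guided Reconstruction can be illustrated with a minimal sketch: pruned tokens are temporarily stood in for by their most similar kept neighbors so every position can still take part in attention, and the original kept tokens are preserved exactly. Note this is an illustrative assumption-laden sketch, not the authors' implementation; the function name, the norm-based keep criterion, and the cosine nearest-neighbor choice are all hypothetical.

```python
# Hedged sketch of Neighbor-Guided Reconstruction (NGR) as described in the
# abstract. All names, the keep criterion, and the nearest-neighbor rule are
# illustrative assumptions; see https://github.com/Perkzi/IPCV for the method.
import torch
import torch.nn.functional as F

def neighbor_guided_reconstruction(tokens: torch.Tensor,
                                   keep_idx: torch.Tensor) -> torch.Tensor:
    """Temporarily reconstruct pruned tokens from their nearest kept neighbors.

    tokens:   (N, D) visual token features from a ViT layer
    keep_idx: indices of tokens retained after pruning
    Returns a full (N, D) sequence in which each pruned token is replaced by
    its most similar kept token, so all positions can still attend.
    """
    kept = tokens[keep_idx]                                   # (K, D)
    # cosine similarity between every token and every kept token
    sim = F.normalize(tokens, dim=-1) @ F.normalize(kept, dim=-1).T  # (N, K)
    nearest = sim.argmax(dim=-1)                              # neighbor index per token
    out = kept[nearest].clone()                               # neighbor stand-ins
    out[keep_idx] = tokens[keep_idx]                          # kept tokens stay exact
    return out

# Usage: keep the 25% highest-norm tokens, reconstruct the rest for attention.
tokens = torch.randn(16, 8)
keep_idx = tokens.norm(dim=-1).topk(4).indices
full = neighbor_guided_reconstruction(tokens, keep_idx)
```

After attention, the real pruned tokens would be restored before the sequence is handed to the LLM, which is what makes the compression information-preserving.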

Yuan Chen, Zichen Wen, Yuzhou Wu, Xuyang Liu, Shuang Chen, Junpeng Ma, Weijia Li, Conghui He, Linfeng Zhang • 2025

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Visual Question Answering | GQA (test) | Accuracy: 61.2 | 119 |
| Visual Question Answering | VizWiz (test) | Accuracy: 89.9 | 66 |
| Object Hallucination Evaluation | POPE (test) | Accuracy: 88.3 | 44 |
| Multi-modal Evaluation | MME (test) | -- | 32 |
| Multimodal Understanding | Multimodal Evaluation Suite (GQA, MMBench, MMBench-CN, MME, POPE, SEED-Bench, TextVQA, VizWiz, OCRBench) | GQA Score: 60.5 | 21 |
| Text-based Visual Question Answering | TextVQA (test) | -- | 10 |
| Multimodal Question Answering | MMBench EN (test) | Accuracy: 84.5 | 9 |
| OCR Evaluation | OCRBench (test) | Score: 47.6 | 9 |
| Multimodal Question Answering | MMBench CN (test) | Accuracy: 83.2 | 9 |
