
Don't Just Chase "Highlighted Tokens" in MLLMs: Revisiting Visual Holistic Context Retention

About

Despite their powerful capabilities, Multimodal Large Language Models (MLLMs) suffer from considerable computational overhead due to their reliance on massive numbers of visual tokens. Recent studies have explored token pruning to alleviate this problem, typically using text-vision cross-attention or [CLS] attention to score and discard redundant visual tokens. In this work, we identify a critical limitation of such attention-first pruning approaches: they tend to preserve semantically similar tokens, resulting in pronounced performance drops under high pruning ratios. To address this, we propose HoloV, a simple yet effective, plug-and-play visual token pruning framework for efficient inference. Distinct from previous attention-first schemes, HoloV rethinks token retention from a holistic perspective. By adaptively distributing the pruning budget across different spatial crops, HoloV ensures that the retained tokens capture the global visual context rather than isolated salient features. This strategy minimizes representational collapse and maintains task-relevant information even under aggressive pruning. Experimental results demonstrate that HoloV achieves superior performance across various tasks, MLLM architectures, and pruning ratios compared to state-of-the-art methods. For instance, LLaVA1.5 equipped with HoloV preserves 95.8% of the original performance after pruning 88.9% of visual tokens, achieving a superior efficiency-accuracy trade-off.
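The contrast the abstract draws can be made concrete with a minimal NumPy sketch. The first function is the standard attention-first baseline (keep the globally top-scoring tokens); the second is a hypothetical illustration of the holistic idea, splitting a square token grid into horizontal bands as "crops" and giving each crop a share of the budget proportional to its attention mass. The grid size, crop layout, and allocation rule here are assumptions for illustration, not the paper's actual HoloV procedure.

```python
import numpy as np

def attention_topk_prune(tokens, cls_attn, keep):
    """Attention-first baseline: keep the `keep` tokens with the highest
    [CLS] attention, regardless of where they sit in the image."""
    idx = np.argsort(cls_attn)[-keep:]
    return tokens[np.sort(idx)]

def crop_budget_prune(tokens, cls_attn, keep, grid=24, crops=4):
    """Illustrative holistic pruning (hypothetical sketch, not HoloV's
    exact rule): split the grid x grid token map into `crops` horizontal
    bands, allot each band a budget proportional to its total attention,
    then keep the top tokens within each band so the retained set spans
    the whole image instead of one salient region."""
    rows_per_crop = grid // crops
    spans = [(c * rows_per_crop * grid, (c + 1) * rows_per_crop * grid)
             for c in range(crops)]
    # Adaptive budget: proportional to each crop's attention mass.
    weights = np.array([cls_attn[s:e].sum() for s, e in spans])
    weights = weights / weights.sum()
    budgets = np.floor(weights * keep).astype(int)
    budgets[np.argmax(weights)] += keep - budgets.sum()  # spend the remainder
    kept = []
    for (s, e), b in zip(spans, budgets):
        if b > 0:  # argsort(...)[-0:] would keep everything, so guard
            kept.extend((np.argsort(cls_attn[s:e])[-b:] + s).tolist())
    return tokens[np.sort(np.array(kept))]
```

Under a peaked attention map, the baseline concentrates all retained tokens around the single most salient region, while the crop-budget version still returns tokens from every band, which is the "holistic context retention" property the abstract argues for.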

Xin Zou, Di Lu, Yizhou Wang, Yibo Yan, Yuanhuiyi Lyu, Xu Zheng, Linfeng Zhang, Xuming Hu • 2025

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Visual Question Answering | VizWiz | Accuracy 55.3 | 1525 |
| Object Hallucination Evaluation | POPE | Accuracy 85.6 | 1455 |
| Visual Question Answering | VQA v2 | Accuracy 79.5 | 1362 |
| Visual Question Answering | TextVQA | Accuracy 57.4 | 1285 |
| Visual Question Answering | GQA | Accuracy 61.7 | 1249 |
| Text-based Visual Question Answering | TextVQA | Accuracy 78.9 | 807 |
| Multimodal Evaluation | MME | -- | 658 |
| Visual Question Answering | GQA | Accuracy 61.7 | 505 |
| Science Question Answering | ScienceQA | Accuracy 79.8 | 502 |
| Video Question Answering | MSRVTT-QA | Accuracy 56.5 | 491 |

Showing 10 of 61 rows.
