
HAWK: Head Importance-Aware Visual Token Pruning in Multimodal Models

About

In multimodal large language models (MLLMs), the surge of visual tokens significantly increases the inference time and computational overhead, making them impractical for real-time or resource-constrained applications. Visual token pruning is a promising strategy for reducing the cost of MLLM inference by removing redundant visual tokens. Existing research usually assumes that all attention heads contribute equally to the visual interpretation. However, our study reveals that different heads may capture distinct visual semantics and inherently play distinct roles in visual processing. In light of this observation, we propose HAWK, a head importance-aware visual token pruning method that perceives the varying importance of attention heads in visual tasks to maximize the retention of crucial tokens. By leveraging head importance weights and text-guided attention to assess visual token significance, HAWK effectively retains task-relevant visual tokens while removing redundant ones. The proposed HAWK is entirely training-free and can be seamlessly applied to various MLLMs. Extensive experiments on multiple mainstream vision-language benchmarks demonstrate that HAWK achieves state-of-the-art accuracy. When applied to Qwen2.5-VL, HAWK retains 96.0% of the original accuracy after pruning 80.2% of the visual tokens. Additionally, it reduces end-to-end latency to 74.4% of the original and further decreases GPU memory usage across the tested models. The code is available at https://github.com/peppery77/HAWK.git.
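The scoring idea described above can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' implementation: it assumes we are given a text-to-visual attention tensor and a per-head importance weight vector (how HAWK derives the head weights is not specified in this abstract), weights each head's text-guided attention by its importance, and keeps the top-scoring visual tokens. The function name `hawk_prune` and all shapes are illustrative assumptions.

```python
import numpy as np

def hawk_prune(attn, head_weights, keep_ratio=0.2):
    """Illustrative head-importance-aware visual token pruning.

    attn:         (H, T, V) text-to-visual attention
                  (H heads, T text tokens, V visual tokens)
    head_weights: (H,) per-head importance weights
    keep_ratio:   fraction of visual tokens to retain
    Returns the sorted indices of visual tokens to keep.
    """
    # Average text-guided attention over text tokens, per head: (H, V)
    per_head_scores = attn.mean(axis=1)
    # Combine heads, weighting each by its importance: (V,)
    scores = (head_weights[:, None] * per_head_scores).sum(axis=0)
    # Retain the top keep_ratio fraction of visual tokens
    k = max(1, int(round(keep_ratio * attn.shape[2])))
    keep = np.argsort(scores)[::-1][:k]
    return np.sort(keep)

# Toy usage: 1 head, 1 text token, 4 visual tokens, keep half
attn = np.array([[[0.1, 0.4, 0.3, 0.2]]])
print(hawk_prune(attn, np.array([1.0]), keep_ratio=0.5))  # [1 2]
```

In this toy call the two highest-scoring visual tokens (indices 1 and 2) survive; in a real MLLM the same selection would run per layer on much larger attention maps, and the head weights would down-weight heads that contribute little to visual interpretation.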

Qihui Zhu, Tao Zhang, Yuchen Wang, Zijian Wen, Mengjie Zhang, Shuangwu Chen, Xiaobin Tan, Jian Yang, Yang Liu, Zhenhua Dong, Xianzhi Yu, Yinfei Pan • 2026

Related benchmarks

Task | Dataset | Metric | Result | Rank
Real-world Visual Question Answering | RealworldQA | -- | -- | 140
Chart Question Answering | ChartQA | Score | 83.6 | 21
Multimodal Large Language Model Evaluation | MME | MME Score | 2310 | 14
Multimodal Understanding and Reasoning | Image Benchmarks (HallBench, MME, TextVQA, ChartQA, AI2D, RealWorldQA, CCBench, OCRVQA, SQA-IMG, POPE) | HallBench Score | 46.5 | 13
Multimodal Visual Question Answering | VLMEvalKit Image Benchmarks | HallBench Accuracy | 44.1 | 13
Text-based Visual Question Answering | TextVQA | Score | 85 | 10
Diagram Understanding | AI2D | Score | 79.9 | 10
Multi-modal Understanding | MME | MME Score | 2310 | 7
Multimodal Question Answering | MLLM Evaluation Suite (HallBench, MME, TextVQA, ChartQA, AI2D, RealWorldQA, CCBench, OCRVQA, SQA-IMG, POPE) (test) | HallBench | 48.5 | 7
