MI-Pruner: Crossmodal Mutual Information-guided Token Pruner for Efficient MLLMs

About

For multimodal large language models (MLLMs), visual information is relatively sparse compared with text. As a result, research on visual token pruning has emerged to enable efficient inference. Current approaches typically measure token importance using attention scores in the visual encoder or the LLM decoder, retaining visual tokens with high attention scores and pruning the rest. In this paper, we pursue a different and more surgical approach. Instead of relying on mechanism-specific signals, we directly compute Mutual Information (MI) between visual and textual features themselves, prior to their interaction. This allows us to explicitly measure crossmodal dependency at the feature level. Our MI-Pruner is simple, efficient, and non-intrusive, requiring no access to internal attention maps and no architectural modifications. Experimental results demonstrate that our approach outperforms previous attention-based pruning methods while adding minimal latency.
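To make the idea concrete, below is a minimal sketch of MI-guided token selection. It is not the authors' estimator: the histogram-based MI proxy (which treats the feature dimensions of a visual token and of a mean-pooled text feature as paired samples), the pooling choice, the keep ratio, and the function names are all illustrative assumptions.

```python
import torch


def histogram_mi(x, y, num_bins=16, eps=1e-10):
    """Estimate mutual information (in nats) between two paired 1-D samples
    via a joint histogram. x and y must have the same length.
    NOTE: a crude stand-in for whatever MI estimator the paper actually uses."""
    def to_bins(v):
        v = (v - v.min()) / (v.max() - v.min() + eps)   # normalize to [0, 1]
        return (v * (num_bins - 1)).long()              # bin indices in [0, num_bins)

    xb, yb = to_bins(x), to_bins(y)
    joint = torch.zeros(num_bins * num_bins)
    joint.scatter_add_(0, xb * num_bins + yb, torch.ones_like(x, dtype=torch.float))
    joint = (joint / joint.sum()).view(num_bins, num_bins)
    px = joint.sum(dim=1, keepdim=True)                 # marginal of x
    py = joint.sum(dim=0, keepdim=True)                 # marginal of y
    return (joint * (torch.log(joint + eps)
                     - torch.log(px + eps)
                     - torch.log(py + eps))).sum()


def prune_visual_tokens(visual_tokens, text_tokens, keep_ratio=0.25):
    """Keep the visual tokens whose MI proxy with the pooled text feature is highest.

    visual_tokens: (N_v, d) visual features from the encoder/projector.
    text_tokens:   (N_t, d) text embeddings of the prompt.
    """
    text_repr = text_tokens.mean(dim=0)                 # (d,) pooled text feature (assumption)
    scores = torch.stack([histogram_mi(tok, text_repr) for tok in visual_tokens])
    num_keep = max(1, int(keep_ratio * visual_tokens.shape[0]))
    keep_idx = scores.topk(num_keep).indices.sort().values  # keep original token order
    return visual_tokens[keep_idx], keep_idx


# Example: prune 576 visual tokens down to 25% before feeding them to the LLM.
vis = torch.randn(576, 4096)
txt = torch.randn(32, 4096)
pruned, kept = prune_visual_tokens(vis, txt, keep_ratio=0.25)
print(pruned.shape, kept.shape)  # torch.Size([144, 4096]) torch.Size([144])
```

Because the scores are computed from the features alone, the selection can run before the visual and text tokens ever interact in the decoder, without touching attention maps, which is what makes this style of pruning non-intrusive.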

Jiameng Li, Aleksei Tiulpin, Matthew B. Blaschko • 2026

Related benchmarks

Task | Dataset | Result | Rank
Object Hallucination Evaluation | POPE | -- | 1455
Science Question Answering | ScienceQA (SQA) | Accuracy: 69.81 | 273
Multimodal Evaluation | MM-Vet | -- | 180
Video Question Answering | MSVD | Accuracy: 70.6 | 152
Video Question Answering | MSRVTT | Accuracy: 56.4 | 100
Visual Question Answering | GQA | GQA Score: 57.01 | 85
Multimodal Evaluation | MME | MME-P Score: 1.43e+3 | 73
Visual Question Answering | TextVQA | TextVQA Accuracy: 55.9 | 67
Video Question Answering | TGIF | Top-1 Acc: 15.7 | 58
58
