Why and When Visual Token Pruning Fails? A Study on Relevant Visual Information Shift in MLLMs Decoding

About

Recently, visual token pruning has been studied to handle the vast number of visual tokens in Multimodal Large Language Models. However, we observe that while existing pruning methods perform reliably on simple visual understanding, they struggle to effectively generalize to complex visual reasoning tasks, a critical gap underexplored in previous studies. Through a systematic analysis, we identify Relevant Visual Information Shift (RVIS) during decoding as the primary failure driver. To address this, we propose Decoding-stage Shift-aware Token Pruning (DSTP), a training-free add-on framework that enables existing pruning methods to align visual tokens with shifting reasoning requirements during the decoding stage. Extensive experiments demonstrate that DSTP significantly mitigates performance degradation of pruning methods in complex reasoning tasks, while consistently yielding performance gains even across visual understanding benchmarks. Furthermore, DSTP demonstrates effectiveness across diverse state-of-the-art architectures, highlighting its generalizability and efficiency with minimal computational overhead.
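The abstract contrasts pruning a fixed set of visual tokens once with re-aligning the kept set as reasoning needs shift during decoding. The paper does not give implementation details here, so the sketch below is only an illustrative toy, not DSTP itself: `select_visual_tokens`, the attention lists, and the keep ratio are all hypothetical stand-ins showing how a per-step re-selection can pick a different token subset than a one-shot (prefill-time) selection.

```python
def select_visual_tokens(attn_to_visual, keep_ratio=0.25):
    """Keep indices of the top-k visual tokens by attention score (toy ranking)."""
    k = max(1, int(len(attn_to_visual) * keep_ratio))
    ranked = sorted(range(len(attn_to_visual)),
                    key=lambda i: attn_to_visual[i], reverse=True)
    return ranked[:k]

# Static pruning: the kept set is chosen once from prefill attention and fixed.
prefill_attn = [0.9, 0.1, 0.4, 0.8, 0.2, 0.3, 0.7, 0.05]  # hypothetical scores
static_keep = set(select_visual_tokens(prefill_attn, keep_ratio=0.5))

# Decoding-stage re-selection: re-rank with the current step's attention,
# so the kept set can follow the shifting focus of reasoning (the RVIS
# phenomenon the abstract describes).
decode_attn = [0.05, 0.9, 0.8, 0.1, 0.7, 0.2, 0.3, 0.4]  # hypothetical scores
dynamic_keep = set(select_visual_tokens(decode_attn, keep_ratio=0.5))

# The two sets overlap only partially: tokens pruned at prefill time can
# become relevant later, which is the failure mode static pruning misses.
```

The point of the sketch is only the design contrast: a one-shot selection cannot recover tokens whose relevance emerges mid-generation, while a per-step re-selection can.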

Jiwan Kim, Kibum Kim, Wonjoong Kim, Byung-Kwan Lee, Chanyoung Park• 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Visual Question Answering | GQA | Accuracy | 61.27 | 1249 |
| Visual Question Answering | ScienceQA | Accuracy | 92.08 | 370 |
| Visual Mathematical Reasoning | WeMath | Accuracy | 43.62 | 127 |
| Visual Question Answering | SQA | Accuracy | 92.08 | 41 |
| Visual Reasoning | MMMU-Pro | Avg@8 | 34.27 | 29 |
| Visual Question Answering | TextVQA | Accuracy | 77.95 | 26 |
| Visual Reasoning | MathVerse | Accuracy | 52.54 | 26 |
| Visual Reasoning | DynaMath | Accuracy | 57.7 | 26 |
| Visual Reasoning | LogicVista | Accuracy | 48.76 | 26 |
| Visual Understanding | ScienceQA, TextVQA, and GQA | Avg Relative Accuracy | 97.5 | 26 |

(10 of 13 rows shown)
