
ResPrune: Text-Conditioned Subspace Reconstruction for Visual Token Pruning in Large Vision-Language Models

About

Large Vision-Language Models (LVLMs) rely on dense visual tokens to capture fine-grained visual information, but processing all of these tokens incurs substantial computational and memory overhead during inference. To address this issue, we propose ResPrune, a training-free visual token pruning framework that enables efficient LVLM inference by selecting a compact yet informative subset of visual tokens. ResPrune formulates visual token pruning as a subspace reconstruction problem and employs a greedy subspace expansion strategy guided by residual energy, allowing it to preserve the geometric structure of the original visual token space. To further incorporate cross-modal alignment, the selection process is conditioned on textual relevance, encouraging the retention of tokens that are both informative and instruction-relevant. The proposed method is lightweight and model-agnostic, and can be seamlessly integrated into existing LVLM pipelines without retraining or architectural modifications. Extensive experiments on multiple LVLM backbones, including LLaVA-1.5, LLaVA-NeXT, and Qwen2.5-VL, demonstrate that ResPrune consistently outperforms existing pruning approaches across a wide range of benchmarks, while achieving effective reductions in computation, memory consumption, and inference latency.
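The core idea of the abstract (greedy subspace expansion guided by residual energy, conditioned on text relevance) can be sketched as follows. This is an illustrative reconstruction, not the paper's exact algorithm: the function name, the relevance weighting `alpha`, and the cosine-similarity relevance score are assumptions made for the example.

```python
import numpy as np

def resprune_select(V, t, k, alpha=0.5):
    """Greedy text-conditioned subspace token selection (illustrative sketch).

    V: (N, d) visual token features; t: (d,) pooled text embedding;
    k: number of tokens to keep; alpha: weight between pure residual
    energy (alpha=1) and text-conditioned selection (alpha<1).

    At each step, score every candidate token by the energy of its residual
    after projecting out the subspace spanned by the tokens selected so far,
    scaled by its relevance to the text embedding, and keep the top scorer.
    """
    N, d = V.shape
    # Text relevance: cosine similarity with the text embedding, min-max
    # normalized to [0, 1] so it can act as a multiplicative weight.
    rel = (V @ t) / (np.linalg.norm(V, axis=1) * np.linalg.norm(t) + 1e-8)
    rel = (rel - rel.min()) / (np.ptp(rel) + 1e-8)

    R = V.copy()          # residual of every token w.r.t. the selected subspace
    selected = []
    for _ in range(k):
        energy = np.sum(R * R, axis=1)                # residual energy per token
        score = energy * (alpha + (1 - alpha) * rel)  # condition on text relevance
        score[selected] = -np.inf                     # never reselect a token
        i = int(np.argmax(score))
        selected.append(i)
        # Expand the subspace: project all residuals off the chosen direction,
        # so the next pick maximizes energy *not yet* reconstructed.
        u = R[i] / (np.linalg.norm(R[i]) + 1e-8)
        R = R - np.outer(R @ u, u)
    return selected
```

Because each step deflates every residual by the newly added direction, the selected tokens greedily span the subspace that best reconstructs the full token set, which is why this style of selection preserves the geometry of the original visual token space rather than just picking the highest-attention tokens.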

Xu Li, Yi Zheng, Yuxuan Liang, Zhe Liu, Xiaolei Chen, Haotian Chen, Rui Zhu, Xiangyang Xue • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Visual Question Answering | VizWiz | Accuracy | 57.9 | 1525 |
| Object Hallucination Evaluation | POPE | Accuracy | 88.3 | 1455 |
| Visual Question Answering | GQA | Accuracy | 63.3 | 1249 |
| Text-based Visual Question Answering | TextVQA | Accuracy | 59.9 | 807 |
| Visual Question Answering | TextVQA (val) | VQA Score | 80.3 | 343 |
| Comprehensive Multi-modal Evaluation | MME | Total Score | 1800 | 113 |
| Science Question Answering | ScienceQA SQA-I | Accuracy | 69.5 | 103 |
| Visual Question Answering | VQA v2 | Accuracy | 79.5 | 101 |
| Object Hallucination Evaluation | POPE (test) | -- | -- | 79 |
| Multimodal Understanding | MMBench English | -- | -- | 55 |

Showing 10 of 14 rows.
