
PIO-FVLM: Rethinking Training-Free Visual Token Reduction for VLM Acceleration from an Inference-Objective Perspective

About

Recently, reducing redundant visual tokens in vision-language models (VLMs) to accelerate VLM inference has emerged as a hot topic. However, most existing methods rely on heuristics built on inter-visual-token similarity or cross-modal visual-text similarity, which limits both compression performance and practical deployment. In contrast, we propose PIO-FVLM from the perspective of inference objectives: it recasts visual token compression as preserving output invariance and selects tokens primarily by their importance to this goal. Specifically, vision tokens are reordered under the guidance of token-level gradient saliency produced by our designed layer-local proxy loss, a coarse constraint linking the current layer to the final result. The most valuable vision tokens are then selected following the non-maximum suppression (NMS) principle. The proposed PIO-FVLM is training-free and compatible with FlashAttention, making it friendly to practical application and deployment. It can be deployed independently as an encoder-free method, or combined with encoder compression approaches such as VisionZip as an encoder-involved method. On LLaVA-Next-7B, PIO-FVLM retains just 11.1% of visual tokens yet maintains 97.2% of the original performance, with a 2.67$\times$ prefill speedup, 2.11$\times$ inference speedup, 6.22$\times$ lower FLOPs, and 6.05$\times$ reduced KV Cache overhead. Our code is available at https://github.com/ocy1/PIO-FVLM.
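To make the selection step concrete, below is a minimal sketch of saliency-ranked token selection with spatial NMS, as the abstract describes. This is an illustrative approximation, not the paper's implementation: it assumes the per-token gradient saliency scores have already been computed (e.g. from the layer-local proxy loss), and the function name, grid layout, and suppression radius are hypothetical choices.

```python
import numpy as np

def select_tokens_nms(saliency, grid_size, keep, radius=1):
    """Pick the `keep` highest-saliency visual tokens, suppressing
    spatial neighbours of already-selected tokens (NMS principle).

    saliency : (N,) per-token importance scores, N = grid_size**2
               tokens laid out on a square grid (hypothetical layout).
    radius   : Chebyshev distance within which neighbours are suppressed.
    """
    order = np.argsort(-saliency)           # tokens by descending saliency
    suppressed = np.zeros(len(saliency), dtype=bool)
    selected = []
    for idx in order:
        if suppressed[idx]:
            continue
        selected.append(idx)
        if len(selected) == keep:
            break
        r, c = divmod(int(idx), grid_size)  # 1D index -> 2D grid position
        for dr in range(-radius, radius + 1):
            for dc in range(-radius, radius + 1):
                nr, nc = r + dr, c + dc
                if 0 <= nr < grid_size and 0 <= nc < grid_size:
                    suppressed[nr * grid_size + nc] = True
    # If NMS suppressed too many candidates, back-fill with the next best.
    for idx in order:
        if len(selected) == keep:
            break
        if idx not in selected:
            selected.append(idx)
    return np.sort(np.array(selected))      # restore positional order

# Example: keep ~11% of an 8x8 grid (64 tokens -> 7 tokens).
rng = np.random.default_rng(0)
scores = rng.random(64)
kept = select_tokens_nms(scores, grid_size=8, keep=7)
```

The NMS step is what distinguishes this from plain top-k: it spreads the kept tokens over the image instead of letting them cluster on one salient region.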

Haokui Zhang, Congyang Ou, Dawei Yan, Peng Wang, Qingsen Yan, Ying Li, Rong Xiao, Chunhua Shen• 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Visual Question Answering | VQA v2 | Accuracy | 79.3 | 1165 |
| Visual Question Answering | GQA | Accuracy | 60.1 | 963 |
| Object Hallucination Evaluation | POPE | Accuracy | 85.9 | 935 |
| Text-based Visual Question Answering | TextVQA | Accuracy | 75.5 | 496 |
| Visual Question Answering | GQA | Accuracy | 61.8 | 374 |
| Science Question Answering | ScienceQA | -- | -- | 229 |
| Science Question Answering | ScienceQA (SQA) | Accuracy | 88.7 | 128 |
| Multimodal Perception and Cognition | MME | Overall Score | 2320 | 103 |
| Comprehensive Multi-modal Evaluation | MME | -- | -- | 73 |
| Visual Question Answering | GQA v1.2 (test) | GQA Score | 61.1 | 28 |

Showing 10 of 16 rows.
