SwiftVLM: Efficient Vision-Language Model Inference via Cross-Layer Token Bypass

About

Visual token pruning is a promising approach for reducing the computational cost of vision-language models (VLMs); to maximize efficiency, existing methods often commit to pruning decisions at early layers. While effective on coarse-grained reasoning tasks, such methods suffer significant performance degradation on tasks requiring fine-grained visual details. Through layer-wise analysis, we reveal substantial discrepancies in visual token importance across layers, showing that tokens deemed unimportant at shallow layers can later become highly relevant for text-conditioned reasoning. To avoid the irreversible loss of critical information caused by premature pruning, we introduce a new pruning paradigm, termed bypass, which preserves unselected visual tokens and forwards them to subsequent pruning stages for re-evaluation. Building on this paradigm, we propose SwiftVLM, a simple and training-free method that performs pruning at model-specific layers with strong visual token selection capability, while enabling independent pruning decisions across layers. Experiments across multiple VLMs and benchmarks demonstrate that SwiftVLM consistently outperforms existing pruning strategies, achieving superior accuracy-efficiency trade-offs and more faithful visual token selection behavior.
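To make the contrast concrete, below is a minimal sketch of early pruning versus the bypass paradigm described in the abstract. It is an illustration of the idea, not the authors' implementation: the random token features, the per-stage linear score (a stand-in for text-conditioned importance, which the abstract notes varies by layer), and the two pruning stages with their keep ratios are all invented for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
N, D = 16, 8                              # toy token count / feature dim
KEEP_RATIOS = (0.5, 0.25)                 # hypothetical per-stage keep ratios

tokens = rng.normal(size=(N, D))          # stand-in for visual token features
stage_w = [rng.normal(size=D) for _ in KEEP_RATIOS]  # per-stage score weights

def scores(stage):
    """Stand-in for layer-dependent, text-conditioned importance.
    Each stage weighs features differently, so a token's rank can shift
    across layers (the discrepancy the paper's layer-wise analysis reports)."""
    return tokens @ stage_w[stage]

def hard_prune():
    """Cascaded early pruning: a token discarded at one stage is gone for good,
    so later stages can only choose among the survivors."""
    alive = np.arange(N)
    for i, r in enumerate(KEEP_RATIOS):
        k = max(1, int(r * N))
        alive = alive[np.argsort(-scores(i)[alive])[:k]]
    return set(alive.tolist())

def bypass_prune():
    """Bypass: unselected tokens skip the heavy layers between stages but stay
    in the candidate pool, so each stage re-scores all N tokens and makes an
    independent decision. (In a real model the selected set would determine
    which tokens the intervening layers process; here features are static.)"""
    selected = np.arange(N)
    for i, r in enumerate(KEEP_RATIOS):
        k = max(1, int(r * N))
        selected = np.argsort(-scores(i))[:k]   # full pool, fresh decision
    return set(selected.tolist())

# Tokens that early pruning loses forever but bypass recovers at a later stage:
print(sorted(bypass_prune() - hard_prune()))
```

Note that under bypass the compute savings come from bypassed tokens skipping the transformer layers between pruning stages, not from shrinking the candidate pool: every stage may re-evaluate the full set of visual tokens.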

Chen Qian, Xinran Yu, Danyang Li, Guoxuan Chi, Zheng Yang, Qiang Ma, Xin Miao • 2026

Related benchmarks

Task                                   Dataset   Metric        Result   Rank
Visual Question Answering              TextVQA   Accuracy      45.3     1117
Object Hallucination Evaluation        POPE      --            --       935
Multi-modal Benchmark                  MMBench   Accuracy      68       40
Visual Question Answering              SQA       Accuracy      69       23
Localization                           RefCOCO   Accuracy      66.6     13
Localization                           RefCOCO+  Accuracy      58.5     13
Localization                           RefCOCOg  Accuracy      60.6     13
Visual Question Answering              GQA       Accuracy      63.6     11
Text-based Visual Question Answering   VQAText   Accuracy      64.1     7
Multi-modal Benchmark                  MME       Total Score   1500     3
