
SVD-Prune: Training-Free Token Pruning For Efficient Vision-Language Models

About

Vision-Language Models (VLMs) have revolutionized multimodal learning by jointly processing visual and textual information. Yet they face significant challenges due to the high computational and memory demands of processing long sequences of vision tokens. Many existing pruning methods rely on local heuristics, such as attention scores or token norms. However, these criteria suffer from positional bias and information dispersion, limiting their ability to preserve essential content at high pruning ratios and leading to performance degradation on visually detailed images. To address these issues, we propose SVD-Prune, a training-free, plug-and-play token pruning method based on Singular Value Decomposition. It decomposes the vision token feature matrix and selects the top-K tokens using statistical leverage scores, ensuring that only the tokens contributing most to the dominant global variance are preserved. Experiments show that SVD-Prune consistently outperforms prior pruning methods under extreme vision token budgets, maintaining strong performance even with only 32 or 16 vision tokens.
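The selection criterion described above can be sketched as follows. This is a minimal illustration, not the authors' released implementation: the function name `svd_prune`, the choice of a truncation rank `r`, and the NumPy-based formulation are all assumptions. Leverage scores are computed as the squared row norms of the left singular vectors restricted to the top-r components, and the K tokens with the highest scores are kept.

```python
import numpy as np

def svd_prune(tokens, k, r=None):
    """Keep the k vision tokens with the highest statistical leverage scores.

    tokens : (n, d) array, one row per vision token feature vector.
    k      : number of tokens to retain.
    r      : truncation rank for the leverage computation (assumption:
             defaults to min(n, d), i.e. the full decomposition).
    """
    n, d = tokens.shape
    if r is None:
        r = min(n, d)
    # Thin SVD of the token feature matrix: tokens = U @ diag(S) @ Vt.
    U, S, Vt = np.linalg.svd(tokens, full_matrices=False)
    # Leverage score of token i: squared norm of row i of U over the
    # top-r singular directions (its share of the dominant variance).
    leverage = np.sum(U[:, :r] ** 2, axis=1)
    # Select the top-k tokens, then restore their original order so the
    # pruned sequence stays positionally consistent.
    keep = np.sort(np.argsort(leverage)[-k:])
    return tokens[keep], keep
```

For example, pruning 576 tokens down to a budget of 32 would be `pruned, idx = svd_prune(features, k=32)`; because the method only inspects the feature matrix, it can be dropped in before the language model without any retraining.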

Yvon Apedo, Martyna Poreba, Michal Szczepanski, Samia Bouchafa • 2026

Related benchmarks

Task                        Dataset             Metric    Result  Rank
Visual Question Answering   TextVQA (test val)  Accuracy  57.24   30
Visual Question Answering   GQA (test val)      Accuracy  59.88   25
