
VLMQ: Token Saliency-Driven Post-Training Quantization for Vision-language Models

About

Post-training quantization (PTQ) has emerged as an effective technique for compressing large models and accelerating inference without retraining. While PTQ has been extensively studied in large language models (LLMs), its application to vision-language models (VLMs) remains underexplored. In this work, we identify two intrinsic characteristics of VLM activations: 1) visual over-representation, where vision tokens are excessive and often redundant, and 2) modality gap, which refers to the clear distribution gap between text and vision tokens in the latent feature space. Together, these two factors significantly deteriorate quantization performance but have been overlooked by existing PTQ methods. To address these challenges, we propose VLMQ, a VLM-tailored PTQ framework that selectively prioritizes salient tokens while suppressing redundant ones during quantization. In particular, we introduce a gradient-driven importance factor to capture the token-wise importance variance, the effectiveness of which is substantiated through both empirical and theoretical analysis. To ensure efficiency, we use lightweight block-wise backpropagation for factor acquisition. Finally, we reformulate the optimization objective into an importance-aware form to preserve important activation information. Extensive evaluations on 8 benchmarks across 0.5B to 32B VLMs demonstrate the state-of-the-art (SOTA) performance of our VLMQ, particularly under low-bit settings. For example, it achieves a substantial 16.45% improvement on MME-RealWorld under 2-bit quantization.
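The two core ideas in the abstract, a gradient-driven per-token importance factor and an importance-aware quantization objective, can be sketched in a few lines. This is a minimal illustration under stated assumptions, not VLMQ's actual implementation: the names `token_importance` and `weighted_quant_error` are hypothetical, the saliency is assumed to be a first-order (gradient-times-activation) term, and the quantizer is naive round-to-nearest rather than the paper's method.

```python
import numpy as np

def token_importance(activations, grads):
    """Hypothetical gradient-driven saliency: first-order Taylor term
    |g * x| summed over the hidden dimension, one score per token."""
    return np.abs(activations * grads).sum(axis=-1)  # shape: (num_tokens,)

def weighted_quant_error(X, W, W_q, s):
    """Importance-aware objective: token-weighted reconstruction error
    || diag(sqrt(s)) (X W - X W_q) ||_F^2, so salient tokens dominate."""
    E = X @ (W - W_q)                       # per-token output error
    return float((s[:, None] * E**2).sum())

rng = np.random.default_rng(0)
X = rng.normal(size=(16, 8))                # 16 tokens, hidden dim 8
G = rng.normal(size=(16, 8))                # gradients w.r.t. activations
W = rng.normal(size=(8, 8))                 # a linear layer's weights

s = token_importance(X, G)
s = s / s.sum()                             # normalize importance factors

# Naive 2-bit round-to-nearest quantization of W (for illustration only).
scale = np.abs(W).max() / 1.5
W_q = np.clip(np.round(W / scale), -2, 1) * scale

err_weighted = weighted_quant_error(X, W, W_q, s)
err_uniform = weighted_quant_error(X, W, W_q, np.full(16, 1.0 / 16))
print(err_weighted, err_uniform)
```

Minimizing the weighted error instead of the uniform one biases the quantizer toward preserving outputs for high-saliency tokens, which is the mechanism the abstract credits for the low-bit gains.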

Yufei Xue, Yushi Huang, Jiawei Shao, Lunjie Zhu, Chi Zhang, Xuelong Li, Jun Zhang • 2025

Related benchmarks

Task | Dataset | Metric | Result | Rank
Science Question Answering | ScienceQA | Accuracy | 88.42 | 502
Visual Question Answering | ChartQA | Accuracy | 80.28 | 371
Chart Question Answering | ChartQA | Accuracy | 79.04 | 356
Visual Question Answering | TextVQA (val) | VQA Score | 81.48 | 343
OCR Evaluation | OCRBench | Score | 80.3 | 329
Text-based Visual Question Answering | TextVQA (val) | Accuracy | 81.82 | 262
Optical Character Recognition | OCRBench | Score | 82.6 | 232
Document Visual Question Answering | DocVQA (val) | Accuracy | 93.83 | 157
Chart Understanding | ChartQA | Accuracy | 62.76 | 127
Multimodal Understanding | SEEDBench2 Plus | Accuracy | 69.43 | 74

(Showing 10 of 19 rows)
