
Rethinking Practical and Efficient Quantization Calibration for Vision-Language Models

About

Post-training quantization (PTQ) is a primary approach for deploying large language models without fine-tuning, and quantized performance is often strongly affected by the calibration step in PTQ. In vision-language models (VLMs), however, visual and text tokens differ substantially in their activation distributions and in their sensitivity to quantization error, which makes effective calibration during PTQ considerably harder. In this work, we rethink what PTQ calibration should align with in VLMs and propose the Token-level Importance-aware Layer-wise Quantization framework (TLQ). Guided by gradient information, we design a token-level importance integration mechanism for quantization error and use it to construct a token-level calibration set, enabling a more fine-grained calibration strategy. Furthermore, TLQ introduces a multi-GPU, quantization-exposed layer-wise calibration scheme. This scheme keeps the layer-wise calibration procedure consistent with the true quantized inference path and distributes the heavy layer-wise calibration workload across multiple RTX 3090 GPUs, thereby reducing reliance on the large memory of A100 GPUs. TLQ is evaluated across two models, three model scales, and two quantization settings, and consistently achieves performance improvements in all of them, indicating strong quantization stability. The code will be released publicly.
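To make the gradient-guided, token-level importance idea concrete, here is a minimal sketch. It assumes a HuggingFace-style causal LM interface (get_input_embeddings, inputs_embeds, labels); the gradient-times-activation score and the top-k selection below are illustrative assumptions, not TLQ's published formulation.

```python
import torch

def token_importance(model, input_ids):
    """Score each token by |gradient x embedding|, a first-order proxy
    for how much that token contributes to the loss (and hence to
    quantization error when its activations are perturbed).

    Assumes a HuggingFace-style causal LM; TLQ's actual importance
    measure may differ.
    """
    # Detach the embedding lookup so the embeddings become a leaf
    # tensor whose gradient we can read after backward().
    embeds = model.get_input_embeddings()(input_ids).detach()
    embeds.requires_grad_(True)

    out = model(inputs_embeds=embeds, labels=input_ids)
    out.loss.backward()

    # Sum |g * a| over the hidden dimension -> one score per token.
    return (embeds.grad * embeds).abs().sum(dim=-1)  # (batch, seq_len)

def select_calibration_tokens(scores, k):
    """Keep the indices of the k highest-importance tokens per sequence,
    forming a token-level calibration set."""
    return scores.topk(k, dim=-1).indices
```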
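The "quantization-exposed" layer-wise scheme can likewise be sketched: each block is calibrated on activations produced by the already-quantized prefix of the network, so calibration sees the true quantized inference path, and the per-layer work is spread over several GPUs. The helper quantize_layer and the round-robin device assignment are hypothetical placeholders; real transformer blocks may return tuples and require attention masks.

```python
import torch

def layerwise_calibrate(layers, hidden, quantize_layer, devices):
    """Quantize blocks one by one, propagating *quantized* activations.

    layers:          list of nn.Module blocks (e.g., transformer layers)
    hidden:          hidden states entering the first block, (B, T, D)
    quantize_layer:  user-supplied PTQ step, (layer, inputs) -> quantized layer
    devices:         torch devices to spread per-layer work across
    """
    quantized = []
    for i, layer in enumerate(layers):
        dev = devices[i % len(devices)]        # round-robin over smaller GPUs
        layer = layer.to(dev)
        hidden = hidden.to(dev)
        qlayer = quantize_layer(layer, hidden) # calibrate on quantized-path inputs
        with torch.no_grad():
            hidden = qlayer(hidden)            # feed quantized outputs forward
        quantized.append(qlayer.to("cpu"))     # free GPU memory for the next block
    return quantized
```

Moving each calibrated block back to the CPU is what keeps the peak memory per GPU bounded by a single layer, which is why the workload fits on 24 GB cards rather than requiring an A100.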

Zhenhao Shang, Haizhao Jing, Guoting Wei, Haokui Zhang, Rong Xiao, Jianqing Gao, Peng Wang • 2026

Related benchmarks

Task                                         Dataset      Metric      Result  Rank
Visual Question Answering                    VizWiz       Accuracy    68.4    1525
Visual Question Answering                    TextVQA      Accuracy    75.8    1285
Multimodal Understanding                     MMMU         Accuracy    45.9    437
Visual Question Answering                    ChartQA      Accuracy    77.1    371
Chart Question Answering                     ChartQA      --          --      356
Optical Character Recognition                OCRBench     --          --      232
Optical Character Recognition Benchmarking   OCRBench     Accuracy    74.2    131
Multimodal Understanding                     SEED-2-Plus  Accuracy    66.3    110
Multimodal Understanding                     MMMU         MMMU Score  49.8    78
