
Rethinking Practical and Efficient Quantization Calibration for Vision-Language Models

About

Post-training quantization (PTQ) is a primary approach for deploying large language models without fine-tuning, and quantized performance is often strongly affected by the calibration stage of PTQ. In vision-language models (VLMs), by contrast, substantial differences between visual and text tokens, both in their activation distributions and in their sensitivity to quantization error, pose significant challenges for effective calibration during PTQ. In this work, we rethink what PTQ calibration should align with in VLMs and propose the Token-level Importance-aware Layer-wise Quantization framework (TLQ). Guided by gradient information, we design a token-level importance integration mechanism for quantization error and use it to construct a token-level calibration set, enabling a more fine-grained calibration strategy. Furthermore, TLQ introduces a multi-GPU, quantization-exposed layer-wise calibration scheme. This scheme keeps the layer-wise calibration procedure consistent with the true quantized inference path and distributes the layer-wise calibration workload across multiple RTX 3090 GPUs, reducing reliance on the large memory of A100 GPUs. TLQ is evaluated across two models, three model scales, and two quantization settings, and consistently improves performance in all settings, indicating strong quantization stability. The code will be released publicly.
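The abstract describes scoring tokens by gradient information and keeping only the most quantization-sensitive ones in the calibration set. A minimal sketch of how such gradient-guided token selection might look is below; the function names, the first-order |activation · gradient| proxy, and the keep-ratio parameter are illustrative assumptions, not the paper's exact formulation.

```python
def token_importance(activations, gradients):
    """Per-token sensitivity proxy (an assumption, not TLQ's exact score):
    the first-order Taylor term |sum_d a_d * g_d|, which estimates how much
    a perturbation (e.g. quantization error) on this token moves the loss.

    activations, gradients: lists of per-token hidden vectors, same shape.
    """
    return [abs(sum(a * g for a, g in zip(tok_a, tok_g)))
            for tok_a, tok_g in zip(activations, gradients)]

def build_calibration_set(activations, gradients, keep_ratio=0.5):
    """Keep the top-scoring fraction of tokens as the calibration set."""
    scores = token_importance(activations, gradients)
    k = max(1, int(keep_ratio * len(scores)))
    top = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]
    top = sorted(top)  # preserve original token order
    return [activations[i] for i in top], top
```

Under this sketch, visual and text tokens are ranked on a common loss-based scale, so the calibration set automatically reflects their different sensitivities rather than treating all tokens uniformly.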

Zhenhao Shang, Haizhao Jing, Guoting Wei, Haokui Zhang, Rong Xiao, Jianqing Gao, Peng Wang • 2026

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Visual Question Answering | TextVQA | Accuracy 75.8 | 1117 |
| Visual Question Answering | VizWiz | Accuracy 68.4 | 1043 |
| Multimodal Understanding | MMMU | Accuracy 45.9 | 275 |
| Visual Question Answering | ChartQA | Accuracy 77.1 | 239 |
| Chart Question Answering | ChartQA | -- | 229 |
| Optical Character Recognition Benchmarking | OCRBench | Accuracy 74.2 | 109 |
| Multimodal Understanding | SEED-2-Plus | Accuracy 66.3 | 99 |
| Optical Character Recognition | OCRBench | OCRBench Score 81 | 83 |
| Multimodal Understanding | MMMU | MMMU Score 49.8 | 78 |
