Fine-Grained Post-Training Quantization for Large Vision Language Models with Quantization-Aware Integrated Gradients

About

Large Vision Language Models (LVLMs) have achieved remarkable success on a range of downstream tasks that require multimodal interaction, but their capabilities come with substantial computational and memory overhead, which hinders practical deployment. Among the many acceleration techniques, post-training quantization is a popular and effective strategy for reducing memory cost and accelerating inference. However, existing LVLM quantization methods typically measure token sensitivity at the modality level, which fails to capture complex cross-token interactions and falls short of quantitatively measuring quantization error at the token level. As tokens interact within the model, the distinction between modalities gradually diminishes, suggesting the need for fine-grained calibration. Inspired by axiomatic attribution in mechanistic interpretability, we introduce a fine-grained quantization strategy based on Quantization-aware Integrated Gradients (QIG), which leverages integrated gradients to quantitatively evaluate token sensitivity and pushes the granularity from the modality level to the token level, reflecting both inter-modality and intra-modality dynamics. Extensive experiments on multiple LVLMs under both W4A8 and W3A16 settings show that our method improves accuracy across models and benchmarks with negligible latency overhead. For example, under 3-bit weight-only quantization, our method improves the average accuracy of LLaVA-onevision-7B by 1.60%, reducing the gap to its full-precision counterpart to only 1.33%. The code is available at https://github.com/ucas-xiang/QIG.
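The abstract does not spell out the QIG procedure, but the attribution primitive it builds on — integrated gradients — is well defined: each input's attribution is its displacement from a baseline times the path integral of the model's gradient along the straight line from baseline to input. A minimal sketch, using a toy analytic model (the function names, toy weights, and the final ranking step are illustrative, not the paper's implementation):

```python
import numpy as np

def integrated_gradients(grad_f, x, baseline, steps=64):
    """Approximate IG_i = (x_i - b_i) * integral_0^1 dF/dx_i(b + a(x - b)) da
    with a midpoint Riemann sum over the straight-line path."""
    alphas = (np.arange(steps) + 0.5) / steps
    total = np.zeros_like(x)
    for a in alphas:
        total += grad_f(baseline + a * (x - baseline))
    return (x - baseline) * total / steps

# Toy scalar-output "model": F(x) = sum(w * x**2), with analytic gradient.
w = np.array([0.5, 2.0, 1.0])
f = lambda x: float(np.sum(w * x**2))
grad_f = lambda x: 2 * w * x

x = np.array([1.0, -1.0, 2.0])        # stand-in for a token embedding
baseline = np.zeros_like(x)            # zero baseline, a common choice
ig = integrated_gradients(grad_f, x, baseline)

# Completeness axiom: attributions sum to F(x) - F(baseline).
assert abs(ig.sum() - (f(x) - f(baseline))) < 1e-6

# A token-level sensitivity score could then rank inputs by |IG|
# (here a hypothetical proxy for which tokens tolerate quantization least).
sensitivity = np.abs(ig)
```

The completeness axiom is what makes such scores additive and comparable across tokens, which is presumably why an axiomatic attribution method is attractive for budgeting quantization error at token granularity.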

Ziwei Xiang, Fanhu Zeng, Hongjian Fang, Rui-Qi Wang, Renxing Chen, Yanan Zhu, Yi Chen, Peipei Yang, Xu-Yao Zhang• 2026

Related benchmarks

Task                            Dataset                               Metric            Result   Rank
Visual Question Answering       VizWiz                                Accuracy          67.12    1525
Physical Commonsense Reasoning  PIQA                                  Accuracy          75.95    572
Multimodal Understanding        MMMU                                  Accuracy          50.89    437
Visual Question Answering       ChartQA                               Accuracy          85.24    371
Visual Question Answering       ScienceQA                             Accuracy          96.73    370
Visual Question Answering       AI2D                                  Accuracy          79.73    249
Science Question Answering      ARC-C                                 Accuracy          39.85    193
Science Question Answering      ARC-E                                 Accuracy          67.17    184
Multi-Domain Knowledge          MMLU                                  Accuracy          32.01    46
Language Modeling               Standard Language Modeling Benchmark  Perplexity (PPL)  6.19     2
