
VEQ: Modality-Adaptive Quantization for MoE Vision-Language Models

About

Mixture-of-Experts (MoE) Vision-Language Models (VLMs) offer remarkable performance but incur prohibitive memory and computational costs, making compression essential. Post-Training Quantization (PTQ) is an effective training-free technique for reducing this overhead, but existing quantization paradigms fall short because they are oblivious to two critical forms of heterogeneity: the inherent discrepancy between vision and language tokens, and the non-uniform contribution of different experts. To bridge this gap, we propose Visual Expert Quantization (VEQ), a dual-aware quantization framework designed to simultaneously accommodate cross-modal differences and inter-expert heterogeneity. Specifically, VEQ incorporates 1) Modality-expert-aware Quantization, which uses expert activation frequency to prioritize error minimization for pivotal experts, and 2) Modality-affinity-aware Quantization, which constructs an enhanced Hessian matrix by integrating token-expert affinity with modality information to guide the calibration process. Extensive experiments across diverse benchmarks verify that VEQ consistently outperforms state-of-the-art baselines. Under the W3A16 configuration, our method achieves average accuracy gains of 2.04% on Kimi-VL and 3.09% on Qwen3-VL over previous SOTA quantization methods, demonstrating superior robustness across various multimodal tasks. Our code will be available at https://github.com/guangshuoqin/VEQ.
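The two components described above can be illustrated with a minimal sketch. This is not the authors' implementation; the function names and the exact weighting scheme are assumptions for illustration. It shows (a) estimating per-expert activation frequency from router top-k choices on calibration data, which could prioritize error minimization for frequently used experts, and (b) forming a token-weighted Hessian H = Σₜ wₜ xₜxₜᵀ, where hypothetical per-token weights wₜ stand in for the paper's combination of token-expert affinity and modality information.

```python
import numpy as np

def expert_activation_frequency(router_topk, num_experts):
    """Fraction of routing slots assigned to each expert.

    router_topk: (tokens, k) array-like of expert indices chosen
    by the router for each calibration token.
    """
    idx = np.asarray(router_topk).ravel()
    counts = np.bincount(idx, minlength=num_experts)
    return counts / counts.sum()

def weighted_hessian(X, token_weights):
    """Token-weighted Hessian proxy H = sum_t w_t x_t x_t^T / T.

    X: (tokens, d) calibration activations for one expert.
    token_weights: (tokens,) nonnegative weights; in VEQ these
    would be derived from token-expert affinity and whether the
    token is a vision or language token (assumed here).
    """
    X = np.asarray(X, dtype=float)
    w = np.asarray(token_weights, dtype=float)
    Xw = X * w[:, None]          # scale each token's row by its weight
    return Xw.T @ X / len(X)     # symmetric (d, d) matrix
```

A weighted Hessian of this form plugs directly into GPTQ-style calibration, where H determines the column-wise quantization order and error compensation; upweighting tokens with high affinity to an expert biases that expert's rounding toward the inputs it actually serves.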

Guangshuo Qin, Zhiteng Li, Zheng Chen, Weihang Zhang, Linghe Kong, Yulun Zhang • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Multimodal Understanding | MMMU (zero-shot) | Zero-shot Accuracy | 71.56 | 26 |
| Diagram Understanding | AI2D (zero-shot) | Zero-shot Accuracy | 82.95 | 5 |
| Document Visual Question Answering | InfoVQA (zero-shot) | Zero-shot Accuracy | 64.48 | 2 |
| Multimodal Benchmarking | MMBench (zero-shot) | Zero-shot Accuracy | 82.3 | 2 |
| Multimodal Evaluation | MME-RealWorld (zero-shot) | Zero-shot Accuracy | 48.03 | 2 |
| Real-world Visual Question Answering | RealWorldQA (zero-shot) | Zero-shot Accuracy | 58.3 | 2 |
| Science Question Answering | ScienceQA (zero-shot) | Zero-shot Accuracy | 89.55 | 2 |
| Text-based Visual Question Answering | TextVQA (zero-shot) | Zero-shot Accuracy | 78.3 | 2 |
| Visual Question Answering | VizWiz (zero-shot) | Zero-shot Accuracy | 69.46 | 2 |
