VEQ: Modality-Adaptive Quantization for MoE Vision-Language Models
About
Mixture-of-Experts (MoE) Vision-Language Models (VLMs) offer remarkable performance but incur prohibitive memory and computational costs, making compression essential. Post-Training Quantization (PTQ) is an effective training-free technique for addressing this overhead. However, existing quantization paradigms fall short because they are oblivious to two critical forms of heterogeneity: the inherent discrepancy between vision and language tokens, and the non-uniform contribution of different experts. To bridge this gap, we propose Visual Expert Quantization (VEQ), a dual-aware quantization framework designed to simultaneously accommodate cross-modal differences and inter-expert heterogeneity. Specifically, VEQ incorporates (1) Modality-expert-aware Quantization, which uses expert activation frequency to prioritize error minimization for pivotal experts, and (2) Modality-affinity-aware Quantization, which constructs an enhanced Hessian matrix by integrating token-expert affinity with modality information to guide the calibration process. Extensive experiments across diverse benchmarks verify that VEQ consistently outperforms state-of-the-art baselines: under the W3A16 configuration, our method achieves average accuracy gains of 2.04% on Kimi-VL and 3.09% on Qwen3-VL over previous SOTA quantization methods, demonstrating superior robustness across multimodal tasks. Our code will be available at https://github.com/guangshuoqin/VEQ.
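To make the two components concrete, here is a minimal NumPy sketch of the kind of calibration statistics the abstract describes. All function names, parameters, and the specific weighting scheme (`modality_scale`, multiplicative combination of affinity and a vision factor) are illustrative assumptions, not VEQ's actual implementation; the sketch only shows (1) estimating per-expert activation frequency from router decisions and (2) building a per-token-weighted, Hessian-like second-moment matrix `X^T diag(w) X` for GPTQ-style calibration.

```python
import numpy as np

def expert_activation_frequency(router_top1, num_experts):
    """Fraction of calibration tokens routed to each expert.

    router_top1: int array of shape (T,), the top-1 expert index per token.
    A frequency-aware scheme could use these values to prioritize
    quantization-error minimization for frequently activated experts.
    """
    counts = np.bincount(router_top1, minlength=num_experts)
    return counts / counts.sum()

def affinity_weighted_hessian(X, affinity, modality_scale, is_vision):
    """Hessian-like statistic H = X^T diag(w) X over calibration tokens.

    X:              (T, d) input activations for one expert's linear layer.
    affinity:       (T,) token-expert affinity scores (e.g. router weights).
    modality_scale: scalar up-weighting vision tokens (hypothetical knob).
    is_vision:      (T,) boolean mask marking vision tokens.
    """
    # Per-token weight: affinity, boosted for vision tokens (assumed form).
    w = affinity * np.where(is_vision, modality_scale, 1.0)
    return (X * w[:, None]).T @ X
```

Because the weighting is a diagonal reweighting of the outer-product sum, the result stays symmetric positive semi-definite (for non-negative weights), so it can drop into any Hessian-based PTQ solver in place of the plain `X^T X`.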
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Multimodal Understanding | MMMU zero-shot | Zero-shot Accuracy | 71.56 | 26 |
| Diagram Understanding | AI2D zero-shot | Zero-shot Accuracy | 82.95 | 5 |
| Document Visual Question Answering | InfoVQA zero-shot | Zero-shot Accuracy | 64.48 | 2 |
| Multimodal Benchmarking | MMBench zero-shot | Zero-shot Accuracy | 82.3 | 2 |
| Multimodal Evaluation | MME-RealWorld zero-shot | Zero-shot Accuracy | 48.03 | 2 |
| Real-world Visual Question Answering | RealWorldQA zero-shot | Zero-shot Accuracy | 58.3 | 2 |
| Science Question Answering | ScienceQA zero-shot | Zero-shot Accuracy | 89.55 | 2 |
| Text-based Visual Question Answering | TextVQA zero-shot | Zero-shot Accuracy | 78.3 | 2 |
| Visual Question Answering | VizWiz zero-shot | Zero-shot Accuracy | 69.46 | 2 |