
Efficient Quantization of Mixture-of-Experts with Theoretical Generalization Guarantees

About

Sparse Mixture-of-Experts (MoE) enables efficient scaling of language and vision models by activating only a small subset of experts per input. While this reduces computation, the large total parameter count still incurs substantial memory overhead during inference. Post-training quantization has been explored to address this issue. Because uniform quantization suffers significant accuracy loss at low bit-widths, mixed-precision methods have recently been explored; however, they often require substantial computation for bit-width allocation and overlook how differently model performance responds to the quantization of individual experts. We propose a theoretically grounded expert-wise mixed-precision strategy that assigns a bit-width to each expert primarily based on the change in its router weights' ℓ2 norm during training. Experts with smaller changes are shown to capture less frequent but critical features, and model performance is more sensitive to their quantization, so they require higher precision. Furthermore, to avoid assigning low precision to experts whose quantization would inject high noise, experts with large maximum intra-neuron variance are also allocated higher precision. Experiments on large-scale MoE models, including Switch Transformer and Mixtral, show that our method achieves higher accuracy than existing approaches, while also reducing inference cost and incurring only negligible overhead for bit-width assignment.
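The allocation rule described in the abstract can be sketched as a simple per-expert decision: an expert gets higher precision if its router-norm change during training was small, or if its maximum intra-neuron weight variance is large. The function name, thresholds, and the 8-bit/4-bit pair below are illustrative assumptions, not the paper's exact procedure.

```python
def assign_bitwidths(router_norm_change, max_intra_neuron_var,
                     norm_threshold, var_threshold,
                     high_bits=8, low_bits=4):
    """Illustrative expert-wise mixed-precision assignment.

    router_norm_change[i]   -- change in expert i's router weights' l2 norm
                               over training (small => rare but critical
                               features, so quantize carefully)
    max_intra_neuron_var[i] -- max intra-neuron weight variance of expert i
                               (large => low-bit quantization injects high
                               noise)
    Thresholds are hypothetical hyperparameters, not values from the paper.
    """
    bits = []
    for delta_norm, var in zip(router_norm_change, max_intra_neuron_var):
        if delta_norm < norm_threshold or var > var_threshold:
            bits.append(high_bits)   # sensitive expert: keep high precision
        else:
            bits.append(low_bits)    # robust expert: safe to quantize low
    return bits

# Toy usage: expert 0 has a small router-norm change, expert 1 has high
# intra-neuron variance, expert 2 triggers neither criterion.
print(assign_bitwidths([0.1, 2.0, 1.5], [0.5, 3.0, 0.2],
                       norm_threshold=0.5, var_threshold=1.0))
# -> [8, 8, 4]
```

Because both statistics are computed once from the trained model, the bit-width pass is linear in the number of experts, consistent with the negligible allocation overhead the abstract claims.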

Mohammed Nowaz Rabbani Chowdhury, Kaoutar El Maghraoui, Hsinyu Tsai, Naigang Wang, Geoffrey W. Burr, Liu Liu, Meng Wang • 2026

Related benchmarks

Task                               Dataset                           Metric    Result  Rank
Question Answering                 PIQA                              Accuracy  81.83   374
Multi-task Language Understanding  MMLU                              Accuracy  61.55   321
Mathematical Reasoning             MathQA                            Accuracy  38.29   305
Sentence Completion                HellaSwag                         Accuracy  81.05   276
Science Question Answering         ARC-C                             Accuracy  56.31   193
Science Question Answering         ARC-E                             Accuracy  80.47   184
Commonsense Reasoning              Wino                              Accuracy  74.98   102
Reading Comprehension              BoolQ                             Accuracy  85.6    55
Average Zero-shot Performance      Aggregate of 8 tasks (zero-shot)  Accuracy  70.01   35
