MoEQuant: Enhancing Quantization for Mixture-of-Experts Large Language Models via Expert-Balanced Sampling and Affinity Guidance

About

Mixture-of-Experts (MoE) large language models (LLMs), which leverage dynamic routing and sparse activation to enhance efficiency and scalability, have achieved higher performance while reducing computational costs. However, these models face significant memory overheads, limiting their practical deployment and broader adoption. Post-training quantization (PTQ), a widely used method for compressing LLMs, encounters severe accuracy degradation and diminished generalization performance when applied to MoE models. This paper investigates the impact of MoE's sparse and dynamic characteristics on quantization and identifies two primary challenges: (1) inter-expert imbalance, referring to the uneven distribution of samples across experts, which leads to insufficient and biased calibration for less frequently utilized experts; (2) intra-expert imbalance, arising from MoE's unique aggregation mechanism, which leads to varying degrees of correlation between different samples and their assigned experts. To address these challenges, we propose MoEQuant, a novel quantization framework tailored for MoE LLMs. MoEQuant includes two novel techniques: 1) Expert-Balanced Self-Sampling (EBSS), an efficient sampling method that constructs a calibration set with balanced expert distributions by using the cumulative probabilities of tokens and expert balance metrics as guiding factors; 2) Affinity-Guided Quantization (AGQ), which incorporates affinities between experts and samples into the quantization process, thereby accurately assessing the impact of individual samples on different experts within the MoE layer. Experiments demonstrate that MoEQuant achieves substantial performance gains (more than a 10-point accuracy gain on HumanEval for DeepSeekMoE-16B under 4-bit quantization) and boosts efficiency.
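The two ideas above can be illustrated with a minimal sketch. This is not the paper's implementation: the function names, the entropy-based balance term, and the `alpha` weight are illustrative assumptions. The first function shows an EBSS-style candidate score that combines a sequence's cumulative token log-probability with an expert-balance term; the second shows an AGQ-style calibration statistic in which each sample's contribution to an expert is weighted by its routing affinity (gate score).

```python
import math

def ebss_score(cum_logprob, expert_counts, alpha=1.0):
    """Hypothetical EBSS-style score for a candidate calibration
    sequence: cumulative token log-probability plus an expert-balance
    bonus (entropy of the expert-usage distribution), so sampling
    favours fluent sequences that spread tokens evenly across experts.
    `alpha` is an assumed weight trading fluency against balance."""
    total = sum(expert_counts)
    if total == 0:
        return cum_logprob
    probs = [c / total for c in expert_counts if c > 0]
    balance = -sum(p * math.log(p) for p in probs)  # usage entropy
    return cum_logprob + alpha * balance

def affinity_weighted_second_moment(samples, affinities):
    """Hypothetical AGQ-style accumulation: a diagonal second-moment
    statistic (standing in for the Hessian proxy used in GPTQ-style
    PTQ) where each calibration sample is weighted by its normalized
    router affinity for this expert, rather than counted uniformly."""
    dim = len(samples[0])
    stat = [0.0] * dim
    z = sum(affinities)
    for x, a in zip(samples, affinities):
        for j in range(dim):
            stat[j] += (a / z) * x[j] * x[j]
    return stat
```

Under this sketch, a sequence whose tokens are routed evenly across experts outscores an equally probable sequence that concentrates on one expert, and a sample the router barely sends to an expert contributes little to that expert's calibration statistic.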

Xing Hu, Zhixuan Chen, Dawei Yang, Zukang Xu, Chen Xu, Zhihang Yuan, Sifan Zhou, Jiangyong Yu• 2025

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Language Modeling | WikiText2 | Perplexity 4.9 | 2839 |
| Question Answering | ARC Easy | -- | 597 |
| Question Answering | PIQA | Accuracy 60.38 | 374 |
| Mathematical Reasoning | MathQA | Accuracy 22.42 | 305 |
| Sentence Completion | HellaSwag | Accuracy 40.12 | 276 |
| Multiple-choice Question Answering | ARC Easy | Accuracy 49.85 | 188 |
| Question Answering | ARC Challenge | Accuracy (ARC) 26.09 | 142 |
| Reasoning | WinoGrande (WG) | Accuracy 51.99 | 135 |
| Language Modeling | Lambada OpenAI | Accuracy 13.22 | 127 |
| Code Generation | HumanEval | HumanEval Score 18.12 | 93 |
Showing 10 of 17 rows
