REAP the Experts: Why Pruning Prevails for One-Shot MoE Compression

About

Sparsely-activated Mixture-of-Experts (SMoE) models offer efficient pre-training and low latency, but their large parameter counts create significant memory overhead, motivating research into expert compression. Contrary to recent findings favouring expert merging on discriminative benchmarks, we find that expert pruning is a superior strategy for generative tasks. We demonstrate that existing merging techniques introduce an irreducible error due to the loss of fine-grained routing control over experts. Leveraging this insight, we propose Router-weighted Expert Activation Pruning (REAP), a novel pruning criterion that considers both router gate-values and expert activation norms to minimize the reconstruction error bound. Across a diverse set of SMoE models ranging from 20B to 1T parameters, REAP consistently outperforms merging and other pruning methods on generative benchmarks, especially at 50% compression. Notably, our method achieves near-lossless compression on code generation tasks with Qwen3-Coder-480B and Kimi-K2, even after pruning 50% of experts.
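
The abstract does not spell out the exact saliency formula, so the sketch below is only illustrative: it assumes a per-expert score that averages router gate value times expert-output norm over the calibration tokens routed to that expert, then keeps the highest-scoring experts (e.g. 50% for 2x expert compression). The function names, tensor layouts, and averaging choice are placeholders for exposition, not the authors' implementation.

import torch


def reap_scores(gate_values: torch.Tensor, expert_outputs: torch.Tensor) -> torch.Tensor:
    """Score each expert by its router-weighted activation norm.

    gate_values:    [num_tokens, num_experts] router gate values over a
                    calibration set (zero where an expert is not selected).
    expert_outputs: [num_tokens, num_experts, hidden_dim] per-expert outputs
                    for the tokens routed to each expert (zeros elsewhere).
    """
    act_norms = expert_outputs.norm(dim=-1)            # [tokens, experts]
    weighted = gate_values * act_norms                 # gate value * activation norm
    tokens_per_expert = (gate_values > 0).sum(dim=0).clamp(min=1)
    return weighted.sum(dim=0) / tokens_per_expert     # mean over routed tokens


def select_experts_to_keep(scores: torch.Tensor, keep_ratio: float = 0.5) -> torch.Tensor:
    """Keep the top-scoring experts; keep_ratio=0.5 corresponds to pruning 50% of experts."""
    num_keep = max(1, int(round(scores.numel() * keep_ratio)))
    return torch.topk(scores, num_keep).indices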

Mike Lasby, Ivan Lazarevich, Nish Sinnadurai, Sean Lie, Yani Ioannou, Vithursan Thangarasa • 2025

Related benchmarks

Task | Dataset | Metric | Result | Rank
Mathematical Reasoning | GSM8K | Accuracy | 89.6 | 1362
Math | GSM8K | Accuracy | 0.873 | 206
Long-context Understanding | LongBench v2 | -- | -- | 109
Math | MATH 500 | Accuracy | 81.8 | 86
Coding | HumanEval+ | -- | -- | 83
Mathematical Reasoning | MATH 500 | Pass@1 Rate | 92 | 68
Multiple-Choice QA | Multiple-Choice Suite | MC Avg | 0.685 | 49
Multiple-choice Question Answering | MC (test) | MC Avg | 77.3 | 46
Creative Writing | WildBench | WildBench Score | 83.1 | 45
Agentic Coding | SWE-bench Verified | Percentage Resolved | 58 | 33
Showing 10 of 24 rows
