
LightMoE: Reducing Mixture-of-Experts Redundancy through Expert Replacing

About

Mixture-of-Experts (MoE) based Large Language Models (LLMs) have demonstrated impressive performance and computational efficiency. However, their deployment is often constrained by substantial memory demands, primarily due to the need to load numerous expert modules. While existing expert compression techniques such as pruning or merging attempt to mitigate this, they often suffer from irreversible knowledge loss or high training overhead. In this paper, we propose a novel expert compression paradigm termed expert replacing, which replaces redundant experts with parameter-efficient modules and recovers their capabilities at low training cost. We find that even a straightforward baseline of this paradigm yields promising performance. Building on this foundation, we introduce LightMoE, a framework that enhances the paradigm with adaptive expert selection, hierarchical expert construction, and an annealed recovery strategy. Experimental results show that LightMoE matches the performance of LoRA fine-tuning at a 30% compression ratio. Even under a more aggressive 50% compression ratio, it outperforms existing methods and achieves an average performance improvement of 5.6% across five diverse tasks. These findings demonstrate that LightMoE strikes a superior balance among memory efficiency, training efficiency, and model performance.
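The core idea of expert replacing can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's implementation: it assumes the redundant experts have already been identified (the paper's adaptive selection, hierarchical construction, and annealed recovery are omitted), and it uses a hypothetical low-rank bottleneck as the parameter-efficient replacement module.

```python
import torch
import torch.nn as nn


class ExpertFFN(nn.Module):
    """A standard MoE expert: a feed-forward block with hidden expansion."""

    def __init__(self, d_model: int = 512, d_hidden: int = 2048):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_model, d_hidden),
            nn.GELU(),
            nn.Linear(d_hidden, d_model),
        )

    def forward(self, x):
        return self.net(x)


class LowRankExpert(nn.Module):
    """Hypothetical parameter-efficient replacement module: a low-rank
    bottleneck (rank << d_hidden) that stands in for a redundant expert
    and can be fine-tuned cheaply to recover its capability."""

    def __init__(self, d_model: int = 512, rank: int = 16):
        super().__init__()
        self.down = nn.Linear(d_model, rank, bias=False)
        self.act = nn.GELU()
        self.up = nn.Linear(rank, d_model, bias=False)

    def forward(self, x):
        return self.up(self.act(self.down(x)))


def replace_experts(experts: nn.ModuleList, redundant_ids, d_model=512, rank=16):
    """Swap experts flagged as redundant for lightweight modules in place."""
    for i in redundant_ids:
        experts[i] = LowRankExpert(d_model, rank)
    return experts


# Build an 8-expert layer and replace three redundant experts.
experts = nn.ModuleList([ExpertFFN() for _ in range(8)])
experts = replace_experts(experts, redundant_ids=[2, 5, 7])

full_params = sum(p.numel() for p in ExpertFFN().parameters())
light_params = sum(p.numel() for p in LowRankExpert().parameters())
```

At these illustrative sizes, each replaced expert shrinks from roughly 2.1M parameters to about 16K, which is where the memory savings come from; the replacement modules would then be trained briefly to recover the dropped experts' behavior.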

Jiawei Hao, Zhiwei Hao, Jianyuan Guo, Li Shen, Yong Luo, Han Hu, Dan Zeng • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Code Generation | HumanEval | HumanEval Score | 55.7 | 93 |
| Math Reasoning | GSM8K | Accuracy (GSM8K) | 59.7 | 49 |
| Intent Classification | Specialized Tasks Intent | Intent Accuracy | 74.6 | 23 |
| Machine Translation | Specialized Tasks Translation | Translation Quality Score | 30.4 | 23 |
| Commonsense Reasoning | Commonsense 8 Sub-Tasks | Accuracy (8 Sub-Tasks) | 56.3 | 23 |
| Machine Translation | Translation (test) | BLEU | 27.7 | 20 |
| Math | GSM8K (test) | Mean@4 | 33.7 | 18 |
| Code | HumanEval (test) | HumanEval Success Rate | 58.1 | 14 |
| Intent Classification | Intent (test) | Intent Accuracy | 81.2 | 14 |
| General Language Understanding | 8 Sub-Tasks (test) | Performance on 8 Sub-Tasks | 59.2 | 14 |
