Expert Divergence Learning for MoE-based Language Models
About
The Mixture-of-Experts (MoE) architecture is a powerful technique for scaling language models, yet it often suffers from expert homogenization, where experts learn redundant functionalities, limiting MoE's full potential. To address this, we introduce Expert Divergence Learning, a novel pre-training strategy that explicitly encourages functional specialization among experts. Our method incorporates a label-driven auxiliary loss that leverages the domain labels inherent in pre-training corpora to maximize the Jensen-Shannon Divergence between the expert routing distributions of different data domains. This objective guides the model toward divergent routing policies across domains and similar routing policies within a domain, leading to emergent and organized expert specialization. We validate our approach by pre-training MoE models of up to 15 billion parameters from scratch. Experimental results demonstrate that models trained with Expert Divergence Learning not only achieve lower language modeling loss but also exhibit significant performance improvements across a diverse range of downstream benchmarks. Further analysis confirms that our method effectively mitigates expert homogenization and yields greater functional specialization, all with negligible computational overhead during training.
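The core idea of the auxiliary loss can be illustrated with a short sketch. Below is a minimal, hypothetical implementation (the function names, the pairwise-averaging scheme, and the exact normalization are assumptions, not the paper's reference code): it averages the router's softmax outputs per domain and returns the negative mean pairwise Jensen-Shannon Divergence, so that minimizing the loss maximizes divergence between domains.

```python
import numpy as np

def jensen_shannon_divergence(p, q, eps=1e-12):
    """JSD(P || Q) = 0.5 * KL(P || M) + 0.5 * KL(Q || M), with M = (P + Q) / 2."""
    m = 0.5 * (p + q)
    kl_pm = np.sum(p * np.log((p + eps) / (m + eps)))
    kl_qm = np.sum(q * np.log((q + eps) / (m + eps)))
    return 0.5 * kl_pm + 0.5 * kl_qm

def divergence_auxiliary_loss(router_probs, domain_labels):
    """Hypothetical auxiliary loss sketch.

    router_probs: (num_tokens, num_experts) softmax routing distributions.
    domain_labels: (num_tokens,) integer domain id per token.
    Returns the negative mean pairwise JSD between per-domain mean routing
    distributions; minimizing it pushes domains toward divergent routing.
    """
    domains = np.unique(domain_labels)
    # Mean routing distribution over all tokens belonging to each domain.
    means = [router_probs[domain_labels == d].mean(axis=0) for d in domains]
    jsds = [jensen_shannon_divergence(means[i], means[j])
            for i in range(len(means)) for j in range(i + 1, len(means))]
    return -float(np.mean(jsds))
```

In a real training loop this term would be scaled by a small coefficient and added to the language modeling loss; identical per-domain routing gives a loss of exactly zero, and more divergent routing drives it further negative.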
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Question Answering | ARC-E | Accuracy | 60.85 | 416 |
| Question Answering | ARC-C | Accuracy | 35.25 | 87 |
| Language Understanding | MMLU | MMLU Score | 33.21 | 70 |
| Language Understanding | CMMLU | Accuracy | 36.58 | 42 |
| Reading Comprehension | RACE-m | Accuracy | 34.66 | 31 |
| Reading Comprehension | RACE | RACE Middle Score | 34.54 | 21 |
| Reading Comprehension | RACE-h | Accuracy | 28.73 | 18 |
| Language Understanding | CEval | Accuracy | 33.81 | 17 |
| Question Answering | ARC | ARC-e Accuracy | 59.08 | 14 |