
ReMoE: Fully Differentiable Mixture-of-Experts with ReLU Routing

About

Sparsely activated Mixture-of-Experts (MoE) models are widely adopted to scale up model capacity without increasing the computation budget. However, vanilla TopK routers are trained in a discontinuous, non-differentiable way, limiting their performance and scalability. To address this issue, we propose ReMoE, a fully differentiable MoE architecture that offers a simple yet effective drop-in replacement for the conventional TopK+Softmax routing, utilizing ReLU as the router instead. We further propose methods to regulate the router's sparsity while balancing the load among experts. ReMoE's continuous nature enables efficient dynamic allocation of computation across tokens and layers, while also exhibiting domain specialization. Our experiments demonstrate that ReMoE consistently outperforms vanilla TopK-routed MoE across various model sizes, expert counts, and levels of granularity. Furthermore, ReMoE exhibits superior scalability with respect to the number of experts, surpassing traditional MoE architectures. The implementation based on Megatron-LM is available at https://github.com/thu-ml/ReMoE.
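The contrast between the two routers described above can be sketched in a few lines. The sketch below is illustrative and not taken from the paper's Megatron-LM implementation: `topk_softmax_router` shows the conventional scheme, where the discrete top-k selection is non-differentiable, while `relu_router` shows the ReMoE-style idea of applying ReLU to the gate logits, so the map from logits to routing weights is continuous and zeros naturally deactivate experts. Function names and shapes are assumptions for illustration.

```python
import numpy as np

def topk_softmax_router(logits, k):
    """Conventional TopK+Softmax router (sketch): keep the k largest
    logits, softmax over them, zero out the rest. The expert selection
    is a discrete choice, so it is discontinuous in the logits."""
    idx = np.argsort(logits)[::-1][:k]          # indices of top-k logits
    weights = np.zeros_like(logits)
    shifted = np.exp(logits[idx] - logits[idx].max())  # stable softmax
    weights[idx] = shifted / shifted.sum()
    return weights

def relu_router(logits):
    """ReMoE-style router (sketch): ReLU of the gate logits. Weights are
    non-negative, experts with non-positive logits get exactly zero, and
    the logits -> weights map is continuous (hence differentiable almost
    everywhere), with per-token sparsity emerging from the zeros."""
    return np.maximum(logits, 0.0)

logits = np.array([1.0, -2.0, 0.5, 3.0])
dense_topk = topk_softmax_router(logits, k=2)   # exactly 2 nonzero weights
sparse_relu = relu_router(logits)               # zeros where logits <= 0
```

Note that with ReLU routing the number of active experts can vary per token, which is what enables the dynamic allocation of computation mentioned above; the paper pairs this with a sparsity regularizer so the average activation stays within budget.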

Ziteng Wang, Jun Zhu, Jianfei Chen • 2024

Related benchmarks

Task                    Dataset        Result                  Rank
Commonsense Reasoning   HellaSwag      Accuracy 30.26          1460
Question Answering      ARC Challenge  Accuracy 20.22          749
Commonsense Reasoning   PIQA           Accuracy 63.55          647
Question Answering      ARC-E          Accuracy 46.68          242
Reading Comprehension   BoolQ          Accuracy 54.16          219
Language Modeling       LAMBADA        Accuracy 35.94          183
Reading Comprehension   RACE           Accuracy 29.38          151
Question Answering      MMLU-Pro       Accuracy 54.98          56
Commonsense Reasoning   HellaSwag      HellaSwag Score 94.95   27
Commonsense Reasoning   SWAG           Accuracy 92.22          24

(10 of 14 rows shown.)
