
MoE++: Accelerating Mixture-of-Experts Methods with Zero-Computation Experts

About

In this work, we aim to simultaneously enhance the effectiveness and efficiency of Mixture-of-Experts (MoE) methods. To achieve this, we propose MoE++, a general and heterogeneous MoE framework that integrates both Feed-Forward Network (FFN) and zero-computation experts. Specifically, we introduce three types of zero-computation experts: the zero expert, copy expert, and constant expert, which correspond to discard, skip, and replace operations, respectively. This design offers three key advantages: (i) Low Computing Overhead: Unlike the uniform mixing mechanism for all tokens within vanilla MoE, MoE++ allows each token to engage with a dynamic number of FFNs, be adjusted by constant vectors, or even skip the MoE layer entirely. (ii) High Performance: By enabling simple tokens to utilize fewer FFN experts, MoE++ allows more experts to focus on challenging tokens, thereby unlocking greater performance potential than vanilla MoE. (iii) Deployment Friendly: Given that zero-computation experts have negligible parameters, we can deploy all zero-computation experts on each GPU, eliminating the significant communication overhead and expert load imbalance associated with FFN experts distributed across different GPUs. Moreover, we leverage gating residuals, enabling each token to consider the pathway taken in the previous layer when selecting the appropriate experts. Extensive experimental results demonstrate that MoE++ achieves better performance while delivering 1.1-2.1x expert forward throughput compared to a vanilla MoE model of the same size, which lays a solid foundation for developing advanced and efficient MoE-related models.

Peng Jin, Bo Zhu, Li Yuan, Shuicheng Yan • 2024
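The abstract's three zero-computation experts and the gating residual can be illustrated with a toy routing layer. The sketch below is an assumption-laden illustration, not the paper's implementation: expert definitions, the top-k softmax gate, and the way previous-layer logits are added back are all simplified stand-ins for the mechanisms the abstract names.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8  # toy hidden size

def zero_expert(x):
    # "discard": output zeros, so this expert contributes nothing
    return np.zeros_like(x)

def copy_expert(x):
    # "skip": identity, the token bypasses any FFN computation
    return x

def make_constant_expert(v):
    # "replace": output a learned constant vector, independent of the token
    def expert(x):
        return np.broadcast_to(v, x.shape)
    return expert

def make_ffn_expert(W1, W2):
    # a standard two-layer ReLU FFN expert
    def expert(x):
        return np.maximum(x @ W1, 0.0) @ W2
    return expert

experts = [
    zero_expert,
    copy_expert,
    make_constant_expert(rng.normal(size=D)),
    make_ffn_expert(rng.normal(size=(D, 4 * D)), rng.normal(size=(4 * D, D))),
]

W_g = rng.normal(size=(D, len(experts)))  # gating weights (hypothetical)

def moe_pp_layer(x, prev_logits=None, top_k=2):
    # gating residual: add the previous layer's routing logits to this layer's,
    # so the token's earlier pathway informs the current routing decision
    logits = x @ W_g
    if prev_logits is not None:
        logits = logits + prev_logits
    top = np.argsort(logits)[-top_k:]          # indices of the top-k experts
    weights = np.exp(logits[top] - logits[top].max())
    weights /= weights.sum()                   # softmax over selected experts
    out = sum(w * experts[i](x) for w, i in zip(weights, top))
    return out, logits

x = rng.normal(size=D)
y1, g1 = moe_pp_layer(x)                       # first MoE++ layer
y2, g2 = moe_pp_layer(y1, prev_logits=g1)      # next layer reuses g1 as residual
print(y2.shape)
```

Because the zero, copy, and constant experts involve no matrix multiplies, routing a token to them costs essentially nothing, which is the source of the throughput gain the abstract reports.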

Related benchmarks

Task                                      Dataset        Metric          Result   Rank
Multi-task Language Understanding         MMLU           Accuracy        24.6     842
Question Answering                        ARC Challenge  Accuracy        43.2     749
Commonsense Reasoning                     PIQA           Accuracy        78       647
Question Answering                        ARC Easy       Normalized Acc  66.9     385
Boolean Question Answering                BoolQ          Accuracy        64.9     307
Question Answering                        OBQA           Accuracy        40       276
Question Answering                        SciQ           Accuracy        89.7     226
Commonsense Reasoning                     SIQA           Accuracy        45.7     96
Logical Reasoning                         LogiQA         Accuracy        28.4     84
Multi-level multi-discipline evaluation   C-Eval         Accuracy        23.6     28
