
Merging Experts into One: Improving Computational Efficiency of Mixture of Experts

About

Scaling the size of language models usually leads to remarkable advancements on NLP tasks, but it often comes at the price of growing computational cost. Although a sparse Mixture of Experts (MoE) can reduce the cost by activating a small subset of parameters (e.g., one expert) for each input, its computation escalates significantly as the number of activated experts grows, limiting its practical utility. Can we retain the advantages of adding more experts without substantially increasing the computational cost? In this paper, we first demonstrate the superiority of selecting multiple experts and then propose a computation-efficient approach called Merging Experts into One (MEO), which reduces the computation cost to that of a single expert. Extensive experiments show that MEO significantly improves computational efficiency, e.g., FLOPS drops from 72.0G for vanilla MoE to 28.6G for MEO. Moreover, we propose a token-level attention block that further enhances the efficiency and performance of token-level MEO, e.g., 83.3% (MEO) vs. 82.6% (vanilla MoE) average score on the GLUE benchmark. Code will be released at: https://github.com/Shwai-He/MEO.
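The core trick the abstract describes can be sketched in a few lines: instead of running each selected expert and then mixing their outputs, merge the selected experts' weights into a single expert and run one forward pass. The sketch below is a minimal illustration, not the paper's implementation; it assumes linear experts and a gate-weighted merge (the names, sizes, and merge rule are illustrative assumptions). For linear experts the two computations agree by linearity, while the merged form needs only one matrix multiply per token.

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_out, n_experts = 8, 4, 4          # illustrative sizes, not from the paper
W = rng.normal(size=(n_experts, d_out, d_in))  # one weight matrix per (linear) expert
x = rng.normal(size=d_in)                  # a single input token
selected = [1, 3]                          # experts picked by a hypothetical router
gates = np.array([0.6, 0.4])               # their (normalized) gate scores

# Vanilla MoE: run every selected expert, then mix the outputs.
y_moe = sum(g * (W[i] @ x) for g, i in zip(gates, selected))

# MEO-style: merge the selected experts' weights first,
# then run a single forward pass through the merged expert.
W_merged = sum(g * W[i] for g, i in zip(gates, selected))
y_meo = W_merged @ x

# By linearity the outputs coincide, but the merged path costs
# one matrix multiply instead of one per selected expert.
assert np.allclose(y_moe, y_meo)
```

Note that this equivalence holds exactly only for linear experts; with nonlinear expert FFNs, merging weights is an approximation, which is part of what the paper studies.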

Shwai He, Run-Ze Fan, Liang Ding, Li Shen, Tianyi Zhou, Dacheng Tao • 2023

Related benchmarks

Task                            Dataset           Result                 Rank
Language Modeling               WikiText          PPL 21.63              479
Natural Language Understanding  GLUE (test)       SST-2 Accuracy 94.27   416
Summarization                   Xsum              ROUGE-2 19.41          108
Natural Language Understanding  SuperGLUE (test)  BoolQ Accuracy 72.11   63
Question Answering              SQuAD             Exact Match 82.87      50
Text Summarization              CNNDM             ROUGE-2 19.8           11
