
Efficient Large Scale Language Modeling with Mixtures of Experts

About

Mixture of Experts layers (MoEs) enable efficient scaling of language models through conditional computation. This paper presents a detailed empirical study of how autoregressive MoE language models scale in comparison with dense models in a wide range of settings: in- and out-of-domain language modeling, zero- and few-shot priming, and full-shot fine-tuning. With the exception of fine-tuning, we find MoEs to be substantially more compute efficient. At more modest training budgets, MoEs can match the performance of dense models using ~4 times less compute. This gap narrows at scale, but our largest MoE model (1.1T parameters) consistently outperforms a compute-equivalent dense model (6.7B parameters). Overall, this performance gap varies greatly across tasks and domains, suggesting that MoE and dense models generalize differently in ways that are worthy of future study. We make our code and models publicly available for research use.

Mikel Artetxe, Shruti Bhosale, Naman Goyal, Todor Mihaylov, Myle Ott, Sam Shleifer, Xi Victoria Lin, Jingfei Du, Srinivasan Iyer, Ramakanth Pasunuru, Giri Anantharaman, Xian Li, Shuohui Chen, Halil Akin, Mandeep Baines, Louis Martin, Xing Zhou, Punit Singh Koura, Brian O'Horo, Jeff Wang, Luke Zettlemoyer, Mona Diab, Zornitsa Kozareva, Ves Stoyanov · 2021
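The conditional computation the abstract refers to means that each token activates only a small subset of the model's expert feed-forward networks, so parameter count grows without a proportional increase in per-token compute. Below is a minimal NumPy sketch of a top-2 gated MoE feed-forward layer in the style of Shazeer et al. (2017); all hyperparameters (4 experts, 16-dim model, top-2 routing) are illustrative and not the paper's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes, not the paper's configuration.
d_model, d_hidden, n_experts, top_k = 16, 32, 4, 2

# Each expert is a small two-layer MLP with its own weights.
W1 = rng.standard_normal((n_experts, d_model, d_hidden)) * 0.02
W2 = rng.standard_normal((n_experts, d_hidden, d_model)) * 0.02
# Gating network: a single linear projection to expert logits.
Wg = rng.standard_normal((d_model, n_experts)) * 0.02

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def moe_layer(x):
    """x: (n_tokens, d_model) -> (n_tokens, d_model).

    Each token is routed to its top-k experts; their outputs are
    combined with renormalized gate weights. Only k of n_experts
    run per token, which is the source of MoE compute efficiency.
    """
    gates = softmax(x @ Wg)                        # (n_tokens, n_experts)
    topk = np.argsort(gates, axis=-1)[:, -top_k:]  # top-k expert indices per token
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        w = gates[t, topk[t]]
        w = w / w.sum()                            # renormalize over chosen experts
        for weight, e in zip(w, topk[t]):
            h = np.maximum(x[t] @ W1[e], 0.0)      # ReLU hidden activation
            out[t] += weight * (h @ W2[e])
    return out

tokens = rng.standard_normal((8, d_model))
print(moe_layer(tokens).shape)  # (8, 16)
```

A dense baseline would run one large feed-forward block for every token; here the parameter count scales with `n_experts` while per-token compute scales only with `top_k`, which is why the paper compares models by training compute rather than parameter count.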

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Commonsense Reasoning | HellaSwag | Accuracy | 55.9 | 1460 |
| Question Answering | ARC Challenge | -- | -- | 749 |
| Commonsense Reasoning | PIQA | Accuracy | 76.9 | 647 |
| Question Answering | OpenBookQA | Accuracy | 29.6 | 465 |
| Natural Language Understanding | GLUE | SST-2 | 94.5 | 452 |
| Mathematical Reasoning | MATH (test) | -- | -- | 433 |
| Question Answering | ARC Easy | Normalized Acc | 70.2 | 385 |
| Physical Interaction Question Answering | PIQA | Accuracy | 76.9 | 323 |
| Multitask Language Understanding | MMLU (test) | -- | -- | 303 |
| Question Answering | TriviaQA | Accuracy | 32.3 | 210 |

Showing 10 of 25 rows.
