
ST-MoE: Designing Stable and Transferable Sparse Expert Models

About

Scale has opened new frontiers in natural language processing -- but at a high cost. In response, Mixture-of-Experts (MoE) and Switch Transformers have been proposed as an energy efficient path to even larger and more capable language models. But advancing the state-of-the-art across a broad set of natural language tasks has been hindered by training instabilities and uncertain quality during fine-tuning. Our work focuses on these issues and acts as a design guide. We conclude by scaling a sparse model to 269B parameters, with a computational cost comparable to a 32B dense encoder-decoder Transformer (Stable and Transferable Mixture-of-Experts or ST-MoE-32B). For the first time, a sparse model achieves state-of-the-art performance in transfer learning, across a diverse set of tasks including reasoning (SuperGLUE, ARC Easy, ARC Challenge), summarization (XSum, CNN-DM), closed book question answering (WebQA, Natural Questions), and adversarially constructed tasks (Winogrande, ANLI R3).
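The core mechanism behind MoE and Switch Transformer layers mentioned above is sparse routing: a learned router sends each token to a small subset of expert feed-forward networks, so parameter count grows with the number of experts while per-token compute stays roughly constant. Below is a minimal NumPy sketch of top-2 token routing (the variant used in many MoE models); all names, dimensions, and the tiny tanh "experts" are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def top2_moe_layer(tokens, w_router, experts):
    """Route each token to its top-2 experts and combine their
    outputs, weighted by the router's softmax probabilities."""
    logits = tokens @ w_router                         # (n_tokens, n_experts)
    probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs /= probs.sum(axis=-1, keepdims=True)         # softmax over experts
    top2 = np.argsort(probs, axis=-1)[:, -2:]          # 2 highest-prob experts per token
    out = np.zeros_like(tokens)
    for t in range(tokens.shape[0]):                   # each token only runs 2 of
        for e in top2[t]:                              # n_experts networks: sparse compute
            out[t] += probs[t, e] * experts[e](tokens[t])
    return out

# Illustrative sizes; a real model would use much larger dimensions.
d_model, n_experts, n_tokens = 8, 4, 5
tokens = rng.standard_normal((n_tokens, d_model))
w_router = rng.standard_normal((d_model, n_experts))
# Hypothetical experts: independent single-layer tanh networks.
expert_weights = [rng.standard_normal((d_model, d_model)) for _ in range(n_experts)]
experts = [lambda x, w=w: np.tanh(x @ w) for w in expert_weights]

y = top2_moe_layer(tokens, w_router, experts)
print(y.shape)  # (5, 8): same shape as the input, as in a dense FFN layer
```

This is why a 269B-parameter sparse model can cost about as much to run as a 32B dense one: total parameters scale with the expert count, but each token activates only a fixed, small number of experts.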

Barret Zoph, Irwan Bello, Sameer Kumar, Nan Du, Yanping Huang, Jeff Dean, Noam Shazeer, William Fedus • 2022

Related benchmarks

Task                              Dataset             Metric          Result  Rank
Summarization                     XSum (test)         ROUGE-2         27.1    231
Summarization                     XSum                ROUGE-2         27.1    108
Natural Language Understanding    SuperGLUE (dev)     Average Score   93.2    91
Natural Language Understanding    SuperGLUE           SGLUE Score     91.2    84
Text Summarization                CNN/Daily Mail (test) ROUGE-2       20.7    65
Natural Language Understanding    SuperGLUE (test)    BoolQ Accuracy  92.4    63
Science Question Answering        AI2 ARC (test)      Accuracy        86.5    6
