
GLaM: Efficient Scaling of Language Models with Mixture-of-Experts

About

Scaling language models with more data, compute and parameters has driven significant progress in natural language processing. For example, thanks to scaling, GPT-3 was able to achieve strong results on in-context learning tasks. However, training these large dense models requires significant amounts of computing resources. In this paper, we propose and develop a family of language models named GLaM (Generalist Language Model), which uses a sparsely activated mixture-of-experts architecture to scale the model capacity while also incurring substantially less training cost compared to dense variants. The largest GLaM has 1.2 trillion parameters, which is approximately 7x larger than GPT-3. It consumes only 1/3 of the energy used to train GPT-3 and requires half of the computation flops for inference, while still achieving better overall zero-shot and one-shot performance across 29 NLP tasks.
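To illustrate the core mechanism the abstract describes — a sparsely activated mixture-of-experts layer that grows total parameters while keeping per-token compute roughly constant — here is a minimal NumPy sketch with top-2 gating. All sizes and names are illustrative assumptions, not values from the paper, and GLaM's actual implementation (expert parallelism, load-balancing losses, etc.) is considerably more involved.

```python
# Sketch of a sparsely activated mixture-of-experts (MoE) layer with top-k
# gating. Hypothetical sizes; only top_k of n_experts run per token.
import numpy as np

rng = np.random.default_rng(0)
n_experts, d_model, d_hidden, top_k = 8, 16, 32, 2

# Each "expert" is an independent two-layer feed-forward network.
w_in = rng.normal(size=(n_experts, d_model, d_hidden)) * 0.1
w_out = rng.normal(size=(n_experts, d_hidden, d_model)) * 0.1
# The gating network scores every expert for a given token.
w_gate = rng.normal(size=(d_model, n_experts)) * 0.1

def moe_layer(x):
    """Route a single token vector x through its top-k experts only."""
    logits = x @ w_gate
    top = np.argsort(logits)[-top_k:]       # indices of the best-scoring experts
    weights = np.exp(logits[top])
    weights /= weights.sum()                # softmax over the chosen experts only
    out = np.zeros_like(x)
    for w, e in zip(weights, top):
        h = np.maximum(x @ w_in[e], 0.0)    # expert feed-forward with ReLU
        out += w * (h @ w_out[e])           # gate-weighted expert output
    return out

token = rng.normal(size=d_model)
y = moe_layer(token)
print(y.shape)  # (16,)
```

Because only `top_k` experts execute per token, the activated compute stays near that of a small dense model while the total parameter count scales with `n_experts` — the trade-off that lets GLaM reach 1.2T parameters at a fraction of GPT-3's training energy.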

Nan Du, Yanping Huang, Andrew M. Dai, Simon Tong, Dmitry Lepikhin, Yuanzhong Xu, Maxim Krikun, Yanqi Zhou, Adams Wei Yu, Orhan Firat, Barret Zoph, Liam Fedus, Maarten Bosma, Zongwei Zhou, Tao Wang, Yu Emma Wang, Kellie Webster, Marie Pellat, Kevin Robinson, Kathleen Meier-Hellstern, Toju Duke, Lucas Dixon, Kun Zhang, Quoc V. Le, Yonghui Wu, Zhifeng Chen, Claire Cui • 2021

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Arithmetic Reasoning | GSM8K | Accuracy | 55 | 155 |
| Science Question Answering | ARC-E | Accuracy | 78.9 | 138 |
| Question Answering | OpenBookQA (OBQA) (test) | OBQA Accuracy | 63 | 130 |
| Question Answering | TriviaQA | -- | -- | 85 |
| Language Modeling | LAMBADA (test) | -- | -- | 71 |
| Question Answering | NQ (test) | EM Accuracy | 32.5 | 66 |
| Reading Comprehension | DROP (dev) | F1 Score | 58.6 | 63 |
| Machine Reading Comprehension | SQuAD 2.0 (dev) | EM | 67 | 57 |
| Question Answering | NQ (Natural Questions) | EM | 37.5 | 55 |
| Open-domain Question Answering | WebQuestions (WebQ) (test) | Exact Match (EM) | 41.1 | 55 |

Showing 10 of 46 rows.
