
Transcending Scaling Laws with 0.1% Extra Compute

About

Scaling language models improves performance but comes with significant computational costs. This paper proposes UL2R, a method that substantially improves existing language models and their scaling curves with a relatively tiny amount of extra compute. The key idea is to continue training a state-of-the-art large language model (e.g., PaLM) for a few more steps with UL2's mixture-of-denoisers objective. We show that, with almost negligible extra computational cost and no new sources of data, we are able to substantially improve the scaling properties of large language models on downstream metrics. In this paper, we continue training PaLM with UL2R, introducing a new set of models at 8B, 62B, and 540B scale which we call U-PaLM. Impressively, at 540B scale, we show an approximately 2x computational saving: U-PaLM achieves the same performance as the final PaLM 540B model at around half its computational budget (i.e., saving ~4.4 million TPUv4 hours). We further show that this improved scaling curve leads to 'emergent abilities' on challenging BIG-Bench tasks -- for instance, U-PaLM does much better than PaLM on some tasks or demonstrates better quality at much smaller scale (62B as opposed to 540B). Overall, we show that U-PaLM outperforms PaLM on many few-shot setups, including English NLP tasks (e.g., commonsense reasoning, question answering), reasoning tasks with chain-of-thought (e.g., GSM8K), multilingual tasks (MGSM, TydiQA), MMLU, and challenging BIG-Bench tasks. Finally, we provide qualitative examples showing the new capabilities of U-PaLM for single- and multi-span infilling.
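The core of UL2R is continuing training on UL2's mixture-of-denoisers objective, which mixes regular span corruption (R), extreme corruption with longer spans or higher corruption rates (X), and a sequential prefix-LM denoiser (S), each signaled by a mode token. The sketch below illustrates how such training examples might be constructed; the span lengths, corruption rates, sentinel format, and function names are illustrative assumptions, not the paper's actual implementation.

```python
import random

# Illustrative sketch of UL2-style mixture-of-denoisers example construction.
# The three denoiser families follow UL2: R (regular span corruption),
# X (extreme corruption: longer spans / higher rate), S (prefix-LM).
# Exact span lengths, corruption rates, and mixing are assumptions here.
DENOISERS = {
    "R": {"mean_span": 3, "corrupt_rate": 0.15},
    "X": {"mean_span": 32, "corrupt_rate": 0.5},
    "S": None,  # prefix-LM: predict a suffix given the prefix
}

def corrupt_spans(tokens, mean_span, corrupt_rate, rng):
    """Mask contiguous spans, replacing each with a sentinel token.
    Returns (inputs, targets) in a T5-style sentinel format."""
    n = len(tokens)
    num_masked = max(1, int(n * corrupt_rate))
    num_spans = max(1, num_masked // mean_span)
    starts = sorted(rng.sample(range(n), num_spans))  # simplified: may drop overlaps
    inputs, targets = [], []
    pos, sentinel = 0, 0
    for s in starts:
        if s < pos:          # skip spans overlapping the previous one
            continue
        e = min(n, s + mean_span)
        inputs += tokens[pos:s] + [f"<extra_id_{sentinel}>"]
        targets += [f"<extra_id_{sentinel}>"] + tokens[s:e]
        sentinel += 1
        pos = e
    inputs += tokens[pos:]
    return inputs, targets

def make_example(tokens, rng):
    """Sample a denoiser and build one training example, prefixed with
    its mode token ([R], [X], or [S]) as in UL2."""
    mode = rng.choice(list(DENOISERS))
    if mode == "S":
        split = rng.randint(1, len(tokens) - 1)
        inputs, targets = tokens[:split], tokens[split:]
    else:
        cfg = DENOISERS[mode]
        inputs, targets = corrupt_spans(
            tokens, cfg["mean_span"], cfg["corrupt_rate"], rng)
    return [f"[{mode}]"] + inputs, targets

rng = random.Random(0)
toks = [f"t{i}" for i in range(64)]
inp, tgt = make_example(toks, rng)
```

In UL2R, batches of such mixed-objective examples are used to continue training an already-converged decoder-only model, which is what makes the extra 0.1% of compute alter the scaling curve.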

Yi Tay, Jason Wei, Hyung Won Chung, Vinh Q. Tran, David R. So, Siamak Shakeri, Xavier Garcia, Huaixiu Steven Zheng, Jinfeng Rao, Aakanksha Chowdhery, Denny Zhou, Donald Metzler, Slav Petrov, Neil Houlsby, Quoc V. Le, Mostafa Dehghani • 2022

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Mathematical Reasoning | GSM8K | Accuracy | 58.5 | 983 |
| Reasoning | BBH | Accuracy | 49.6 | 507 |
| Multitask Language Understanding | MMLU (test) | Accuracy | 70.7 | 303 |
| Question Answering | StrategyQA | Accuracy | 0.766 | 114 |
| Multilingual Question Answering | TyDiQA | Accuracy | 54.6 | 44 |
| Commonsense Reasoning | BoolQ, PIQA, HellaSwag, Winogrande (zero-shot) | Avg Commonsense Accuracy | 84.9 | 34 |
| Closed-book Question Answering | TriviaQA | Accuracy | 82 | 12 |
| Closed-book Question Answering | Natural Questions | Accuracy | 40.1 | 12 |
| Multiple-choice Question Answering | MMLU (test) | Accuracy | 70.7 | 12 |
| Reading Comprehension | LAMBADA | Accuracy | 80.5 | 10 |

(10 of 12 benchmark rows shown)
