Sparse Upcycling: Training Mixture-of-Experts from Dense Checkpoints

About

Training large, deep neural networks to convergence can be prohibitively expensive. As a result, often only a small selection of popular, dense models are reused across different contexts and tasks. Increasingly, sparsely activated models, which seek to decouple model size from computation costs, are becoming an attractive alternative to dense models. Although they offer better quality for a given computation cost, sparse models remain data-hungry and costly to train from scratch in the large-scale regime. In this work, we propose sparse upcycling, a simple way to reuse sunk training costs by initializing a sparsely activated Mixture-of-Experts model from a dense checkpoint. We show that sparsely upcycled T5 Base, Large, and XL language models and Vision Transformer Base and Large models significantly outperform their dense counterparts on SuperGLUE and ImageNet, respectively, using only ~50% of the initial dense pretraining sunk cost. The upcycled models also outperform sparse models trained from scratch on 100% of the initial dense pretraining computation budget.
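
The core operation is simple enough to sketch. Below is a minimal NumPy sketch of the upcycling step as the abstract describes it: all non-MLP parameters are copied unchanged from the dense checkpoint, each expert of the new Mixture-of-Experts layer is initialized as an identical copy of the dense MLP, and the router, which has no dense counterpart, is initialized from scratch. The flat parameter dict and names like `mlp/wi` and `router/kernel` are hypothetical placeholders for illustration, not the authors' actual code.

```python
import numpy as np

def upcycle_block(dense_block, num_experts, d_model, rng):
    """Turn one dense transformer block's params into an MoE block.

    Hypothetical sketch: parameter names and the flat-dict layout are
    assumptions, not the paper's implementation.
    """
    # Copy all non-MLP parameters (attention, layer norms) as-is.
    moe_block = {k: v.copy() for k, v in dense_block.items()
                 if not k.startswith("mlp/")}
    # Each expert starts as an identical copy of the dense MLP,
    # stacked along a new leading expert axis.
    for k, v in dense_block.items():
        if k.startswith("mlp/"):
            moe_block[k] = np.stack([v.copy() for _ in range(num_experts)])
    # The router has no dense counterpart, so it starts from scratch.
    moe_block["router/kernel"] = rng.normal(
        scale=0.02, size=(d_model, num_experts))
    return moe_block

# Usage: upcycle a toy block with a 16-dim model and 4 experts.
rng = np.random.default_rng(0)
dense = {
    "attn/kernel": rng.normal(size=(16, 16)),
    "ln/scale": np.ones(16),
    "mlp/wi": rng.normal(size=(16, 64)),
    "mlp/wo": rng.normal(size=(64, 16)),
}
moe = upcycle_block(dense, num_experts=4, d_model=16, rng=rng)
assert moe["mlp/wi"].shape == (4, 16, 64)  # one copy per expert
```

Because every expert starts as a copy of the dense MLP, the upcycled model initially behaves close to the dense checkpoint, and it is the continued training that differentiates the experts.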

Aran Komatsuzaki, Joan Puigcerver, James Lee-Thorp, Carlos Riquelme Ruiz, Basil Mustafa, Joshua Ainslie, Yi Tay, Mostafa Dehghani, Neil Houlsby • 2022

Related benchmarks

Task                              | Dataset       | Result          | Rank
----------------------------------|---------------|-----------------|-----
Commonsense Reasoning             | WinoGrande    | --              | 1085
Multitask Language Understanding  | MMLU          | Accuracy: 74.6  | 413
Commonsense Reasoning             | HellaSwag     | Accuracy: 80.38 | 350
Science Question Answering        | ARC Challenge | Accuracy: 63.31 | 342
Logical Reasoning                 | BBH           | Accuracy: 70.16 | 201
Medical Visual Question Answering | VQA-RAD       | --              | 198
Graduate-level Question Answering | GPQA          | Accuracy: 36.28 | 184
Science Question Answering        | ARC Easy      | Accuracy: 86.32 | 155
Language Understanding            | MMLU 5-shot   | --              | 132
Image Classification              | VTAB          | --              | 103

Showing 10 of 33 rows.
