
Llama 3 Meets MoE: Efficient Upcycling

About

Scaling large language models (LLMs) significantly improves performance but comes with prohibitive computational costs. Mixture-of-Experts (MoE) models offer an efficient alternative, increasing capacity without a proportional rise in compute requirements. However, training MoE models from scratch poses challenges such as overfitting and routing instability. We present an efficient training recipe that leverages pre-trained dense checkpoints, training an 8-Expert Top-2 MoE model from Llama 3-8B with less than 1% of typical pre-training compute. Our approach improves downstream performance on academic benchmarks, achieving a 2% gain in 0-shot accuracy on MMLU, while reaching a Model FLOPs Utilization (MFU) of 46.8% during training with our framework. We also integrate online upcycling in NeMo, enabling seamless use of pre-trained weights and cost-effective development of high-capacity MoE models.
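The core idea of upcycling can be illustrated with a small sketch (this is not the paper's NeMo implementation): every expert in the MoE layer is initialized as an exact copy of the pre-trained dense FFN weights, and only the router is freshly initialized. All names below (`UpcycledMoELayer`, the ReLU FFN placeholder) are illustrative assumptions, not the authors' code.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

class UpcycledMoELayer:
    """Top-2 MoE layer 'upcycled' from a dense FFN (illustrative sketch).

    Each expert starts as a copy of the dense weights; a new, randomly
    initialized router learns to distribute tokens during fine-tuning.
    """

    def __init__(self, dense_w1, dense_w2, num_experts=8, top_k=2, seed=0):
        rng = np.random.default_rng(seed)
        d_model = dense_w1.shape[0]
        # Upcycling: replicate the pre-trained dense FFN into every expert.
        self.w1 = [dense_w1.copy() for _ in range(num_experts)]
        self.w2 = [dense_w2.copy() for _ in range(num_experts)]
        # The router is the only newly initialized component.
        self.router = rng.normal(scale=0.02, size=(d_model, num_experts))
        self.top_k = top_k

    def __call__(self, x):
        # x: (tokens, d_model)
        probs = softmax(x @ self.router)                   # (tokens, experts)
        top = np.argsort(probs, axis=-1)[:, -self.top_k:]  # top-2 expert ids
        out = np.zeros_like(x)
        for t in range(x.shape[0]):
            # Renormalize the selected gates so they sum to 1.
            gates = probs[t, top[t]]
            gates = gates / gates.sum()
            for g, e in zip(gates, top[t]):
                # Simple ReLU FFN stands in for the real expert MLP.
                h = np.maximum(x[t] @ self.w1[e], 0.0)
                out[t] = out[t] + g * (h @ self.w2[e])
        return out
```

A useful property of this initialization: because all experts are identical copies and the gates are renormalized to sum to 1, the MoE layer initially reproduces the dense FFN's output exactly, so training starts from the dense model's quality rather than from scratch.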

Aditya Vavre, Ethan He, Dennis Liu, Zijie Yan, June Yang, Nima Tajbakhsh, Ashwath Aithal • 2024

Related benchmarks

Task                               Dataset       Metric               Result   Rank
Physical Commonsense Reasoning     PIQA          Accuracy             78.62    329
Reading Comprehension              BoolQ         Accuracy             88.23    219
Multi-task Language Understanding  MMLU (test)   Normalized Accuracy  64.1     76
Science Question Answering         SciQ          Normalized Accuracy  97       44
Question Answering                 OpenBookQA    Normalized Accuracy  44.8     35
Truthfulness Evaluation            TruthfulQA    Normalized Accuracy  44.22    10
Logical Reasoning                  LogiQA        Normalized Accuracy  30.11    2
