Llama 3 Meets MoE: Efficient Upcycling
About
Scaling large language models (LLMs) significantly improves performance but comes with prohibitive computational costs. Mixture-of-Experts (MoE) models offer an efficient alternative, increasing capacity without a proportional rise in compute requirements. However, training MoE models from scratch poses challenges like overfitting and routing instability. We present an efficient training recipe leveraging pre-trained dense checkpoints, training an 8-Expert Top-2 MoE model from Llama 3-8B with less than 1% of typical pre-training compute. Our approach enhances downstream performance on academic benchmarks, achieving a **2%** improvement in 0-shot accuracy on MMLU, while reaching a Model FLOPs Utilization (MFU) of **46.8%** during training using our framework. We also integrate online upcycling in NeMo for seamless use of pre-trained weights, enabling cost-effective development of high-capacity MoE models.
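The core idea of upcycling is to initialize every expert of the MoE feed-forward layer from the dense checkpoint's FFN weights and train only a freshly initialized router (plus the rest of the network) from there. The sketch below illustrates this in PyTorch under stated assumptions: `DenseFFN`, `UpcycledMoE`, and the dimensions used are illustrative placeholders, not the paper's or NeMo's actual classes, and the routing loop is written for clarity rather than efficiency.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F


class DenseFFN(nn.Module):
    """Stand-in for a dense Llama-style SwiGLU feed-forward block (hypothetical name)."""

    def __init__(self, d_model: int, d_ff: int):
        super().__init__()
        self.gate_proj = nn.Linear(d_model, d_ff, bias=False)
        self.up_proj = nn.Linear(d_model, d_ff, bias=False)
        self.down_proj = nn.Linear(d_ff, d_model, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.down_proj(F.silu(self.gate_proj(x)) * self.up_proj(x))


class UpcycledMoE(nn.Module):
    """Top-2 MoE layer whose experts start as copies of a pre-trained dense FFN."""

    def __init__(self, dense_ffn: DenseFFN, d_model: int, num_experts: int = 8, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        # Router weights are new; experts are initialized by copying the dense FFN.
        self.router = nn.Linear(d_model, num_experts, bias=False)
        self.experts = nn.ModuleList(copy.deepcopy(dense_ffn) for _ in range(num_experts))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (tokens, d_model); flatten batch and sequence dims before calling.
        logits = self.router(x)                            # (tokens, num_experts)
        weights, chosen = logits.topk(self.top_k, dim=-1)  # top-2 experts per token
        weights = weights.softmax(dim=-1)                  # normalize the two gate scores
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            token_idx, slot = (chosen == e).nonzero(as_tuple=True)
            if token_idx.numel() == 0:
                continue
            out[token_idx] += weights[token_idx, slot].unsqueeze(-1) * expert(x[token_idx])
        return out


# Usage: upcycle one dense block (Llama 3-8B sized dims assumed) and run dummy tokens through it.
dense = DenseFFN(d_model=4096, d_ff=14336)
moe = UpcycledMoE(dense, d_model=4096, num_experts=8, top_k=2)
tokens = torch.randn(16, 4096)
print(moe(tokens).shape)  # torch.Size([16, 4096])
```

Because every expert starts from the same pre-trained weights, the upcycled model initially reproduces the dense model's behavior, and continued training lets the router and experts specialize at a small fraction of from-scratch pre-training compute.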
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Physical Commonsense Reasoning | PIQA | Accuracy | 78.62 | 329 |
| Reading Comprehension | BoolQ | Accuracy | 88.23 | 219 |
| Multi-task Language Understanding | MMLU (test) | Normalized Accuracy | 64.1 | 76 |
| Science Question Answering | SciQ | Normalized Accuracy | 97 | 44 |
| Question Answering | OpenBookQA | Normalized Accuracy | 44.8 | 35 |
| Truthfulness Evaluation | TruthfulQA | Normalized Accuracy | 44.22 | 10 |
| Logical Reasoning | LogiQA | Normalized Accuracy | 30.11 | 2 |