A Hybrid Tensor-Expert-Data Parallelism Approach to Optimize Mixture-of-Experts Training

About

Mixture-of-Experts (MoE) is a neural network architecture that adds sparsely activated expert blocks to a base model, increasing the number of parameters without impacting computational costs. However, current distributed deep learning frameworks are limited in their ability to train high-quality MoE models with large base models. In this work, we present DeepSpeed-TED, a novel, three-dimensional, hybrid parallel algorithm that combines data, tensor, and expert parallelism to enable the training of MoE models with 4 to 8x larger base models than the current state-of-the-art. We also describe memory optimizations in the optimizer step, and communication optimizations that eliminate unnecessary data movement. We implement our approach in DeepSpeed and achieve speedups of 26% over a baseline (i.e., without our communication optimizations) when training a 40 billion parameter MoE model (a 6.7 billion parameter base model with 16 experts) on 128 V100 GPUs.
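To make the three-dimensional decomposition concrete, below is a minimal sketch of how ranks can be partitioned into tensor-, expert-, and data-parallel process groups. It assumes a rank ordering in which the tensor-parallel coordinate varies fastest, then expert, then data; the function name build_ted_groups and this specific layout are illustrative assumptions, not DeepSpeed-TED's actual implementation.

    # Illustrative sketch of a 3D tensor/expert/data process grid.
    # Assumes dist.init_process_group() has already been called and that
    # world_size == tensor_size * expert_size * data_size.
    import torch.distributed as dist

    def build_ted_groups(world_size: int, tensor_size: int, expert_size: int):
        """Partition world_size ranks into tensor/expert/data process groups.

        Rank layout (assumed): rank = t + e * T + d * T * E, where t, e, d
        are the tensor, expert, and data coordinates and T, E are the
        tensor- and expert-parallel group sizes.
        """
        assert world_size % (tensor_size * expert_size) == 0
        data_size = world_size // (tensor_size * expert_size)

        tensor_groups, expert_groups, data_groups = [], [], []

        # Tensor-parallel groups: consecutive ranks shard each layer's weights.
        for start in range(0, world_size, tensor_size):
            tensor_groups.append(
                dist.new_group(list(range(start, start + tensor_size))))

        # Expert-parallel groups: for a fixed tensor coordinate within each
        # data-parallel replica, ranks strided by tensor_size hold the experts.
        for block in range(0, world_size, tensor_size * expert_size):
            for t in range(tensor_size):
                ranks = [block + t + e * tensor_size for e in range(expert_size)]
                expert_groups.append(dist.new_group(ranks))

        # Data-parallel groups: ranks sharing the same tensor and expert
        # coordinates across all data-parallel replicas.
        for t in range(tensor_size):
            for e in range(expert_size):
                ranks = [t + e * tensor_size + d * tensor_size * expert_size
                         for d in range(data_size)]
                data_groups.append(dist.new_group(ranks))

        return tensor_groups, expert_groups, data_groups

For example, on 128 GPUs one could call build_ted_groups(128, 4, 16), giving a data-parallel degree of 2; these numbers are purely illustrative and are not the configuration reported in the paper.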

Siddharth Singh, Olatunji Ruwase, Ammar Ahmad Awan, Samyam Rajbhandari, Yuxiong He, Abhinav Bhatele • 2023

Related benchmarks

Task                  Dataset                            Result     Rank
Training Efficiency   Mixtral-8x22B Coarse-grained       MFU 36.6   5
Training Efficiency   Qwen2-57B-A14B Fine-grained        MFU 23.1   5
Training Efficiency   Mixtral-8x22b-G8T8 Fine-grained    MFU 8.7    5
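All results above are reported as MFU (Model FLOPs Utilization), expressed as a percentage. As a reminder of the standard definition, not anything specific to this benchmark suite:

    MFU = \frac{\text{achieved model FLOP/s}}{\text{peak hardware FLOP/s}} \times 100\%

So an MFU of 36.6 means the training run sustained 36.6% of the hardware's theoretical peak throughput, counting only the FLOPs required by the model itself.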
