
Auxiliary-Loss-Free Load Balancing Strategy for Mixture-of-Experts

About

For Mixture-of-Experts (MoE) models, an unbalanced expert load leads to routing collapse or increased computational overhead. Existing methods commonly employ an auxiliary loss to encourage load balance, but a large auxiliary loss introduces non-negligible interference gradients into training and thus impairs model performance. To control load balance without producing undesired gradients during training, we propose Loss-Free Balancing, an auxiliary-loss-free load balancing strategy. Specifically, before the top-K routing decision, Loss-Free Balancing first applies an expert-wise bias to the routing scores of each expert. By dynamically updating each expert's bias according to its recent load, Loss-Free Balancing consistently maintains a balanced distribution of expert load. In addition, since Loss-Free Balancing produces no interference gradients, it also elevates the upper bound of model performance attainable from MoE training. We validate Loss-Free Balancing on MoE models with up to 3B parameters trained on up to 200B tokens. Experimental results show that Loss-Free Balancing achieves both better performance and better load balance compared with traditional auxiliary-loss-controlled load balancing strategies.
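The routing procedure described above can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the function name, the batch-level bias update, and the `update_rate` value are assumptions for clarity. The key points it shows are that (a) the expert-wise bias is added to the scores only for top-K selection, and (b) the bias is nudged up for underloaded experts and down for overloaded ones based on recent load, with no gradient involved.

```python
import numpy as np

def loss_free_balancing_step(scores, bias, top_k, update_rate=0.001):
    """One routing step of a Loss-Free-Balancing-style router (sketch).

    scores: (num_tokens, num_experts) gating scores from the router
    bias:   (num_experts,) expert-wise bias used only for routing
    Returns (topk_idx, new_bias).
    """
    num_tokens, num_experts = scores.shape

    # Top-K selection uses the biased scores; the bias influences which
    # experts are chosen but not the gating weights applied to outputs.
    biased = scores + bias
    topk_idx = np.argsort(-biased, axis=1)[:, :top_k]

    # Measure each expert's recent load: tokens assigned in this batch.
    load = np.bincount(topk_idx.ravel(), minlength=num_experts)

    # Gradient-free update: move each bias toward balance by a fixed step,
    # decreasing it for overloaded experts and increasing it for
    # underloaded ones (sign of mean load minus actual load).
    mean_load = load.mean()
    new_bias = bias + update_rate * np.sign(mean_load - load)
    return topk_idx, new_bias
```

Because the update is a fixed-size sign step driven only by observed load counts, it steers routing toward balance without adding any term to the training loss, which is the property the abstract emphasizes.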

Lean Wang, Huazuo Gao, Chenggang Zhao, Xu Sun, Damai Dai • 2024

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Downstream Performance Evaluation | CORE | CORE Score: 18.031 | 17 |
| Language Modeling | FineWeb-Edu 100B (val) | CE Loss: 2.898 | 13 |
| Language Modeling | Mini-GPT-OSS (val) | Validation Loss: 3.346 | 2 |
| Language Modeling | DS Mini Lite V2 (val) | Validation Loss: 3.057 | 2 |
| Language Modeling | Mini-Qwen3 (val) | Validation Loss: 2.74 | 2 |
