
Maximum Score Routing For Mixture-of-Experts

About

Routing networks in sparsely activated mixture-of-experts (MoE) models dynamically allocate input tokens to the top-k experts through differentiable sparse transformations, enabling scalable model capacity while preserving computational efficiency. Traditional MoE networks impose an expert capacity constraint to ensure GPU-friendly computation. However, this leads to token dropping when capacity is saturated and to low hardware efficiency due to padding in underutilized experts. Removing the capacity constraint, in turn, compromises load balancing and computational efficiency. To address these issues, we propose Maximum Score Routing ($\mathbf{MaxScore}$), a novel MoE routing paradigm that models routing as a minimum-cost maximum-flow problem and integrates a SoftTopk operator. MaxScore resolves the fundamental limitations of iterative rerouting and optimal transport formulations, achieving lower training losses and higher evaluation scores at equivalent FLOPs compared to both constrained and unconstrained baselines. Implementation details and experimental configurations can be obtained from $\href{https://github.com/dongbw18/MaxScore.git}{MaxScore}$.
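To make the token-dropping problem concrete, here is a minimal sketch of vanilla capacity-constrained top-k routing, the baseline behavior the abstract contrasts MaxScore against. The function name `capacity_topk_route` and the greedy slot-filling policy are illustrative assumptions, not the paper's implementation (MaxScore instead solves a minimum-cost maximum-flow problem with a SoftTopk operator).

```python
import math
from typing import List, Tuple

def softmax(xs: List[float]) -> List[float]:
    """Numerically stable softmax over one token's expert logits."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def capacity_topk_route(
    logits: List[List[float]], k: int, capacity: int
) -> Tuple[List[Tuple[int, int, float]], List[int]]:
    """Vanilla capacity-constrained top-k routing (illustrative only).

    Each token's top-k expert slots are honored only while the chosen
    expert still has spare capacity; overflowing tokens are dropped.
    Returns (assignments as (token, expert, gate_prob) triples,
    ids of tokens that lost at least one expert slot).
    """
    n_experts = len(logits[0])
    load = [0] * n_experts          # tokens already assigned per expert
    assignments: List[Tuple[int, int, float]] = []
    dropped: List[int] = []
    for t, row in enumerate(logits):
        probs = softmax(row)
        # Experts ranked by gate probability; keep the top k.
        topk = sorted(range(n_experts), key=lambda e: -probs[e])[:k]
        placed = 0
        for e in topk:
            if load[e] < capacity:  # expert has a free slot
                load[e] += 1
                assignments.append((t, e, probs[e]))
                placed += 1
        if placed < k:              # some slot overflowed: token dropped
            dropped.append(t)
    return assignments, dropped

# Three tokens all prefer expert 0; with capacity 1 only the first is served.
assignments, dropped = capacity_topk_route(
    [[2.0, 0.0], [2.0, 0.0], [2.0, 0.0]], k=1, capacity=1
)
print(assignments)  # token 0 routed to expert 0
print(dropped)      # tokens 1 and 2 are dropped
```

The skewed-gate example shows both failure modes at once: tokens 1 and 2 are dropped at the saturated expert, while expert 1 sits idle and would be padded on hardware.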

Bowen Dong, Yilong Fan, Yutao Sun, Zhenyu Li, Tengyu Pan, Xun Zhou, Jianyong Wang • 2025

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Zero-shot Natural Language Understanding | Standard Zero-shot NLU Suite (ARC-challenge, ARC-easy, BoolQ, HellaSwag, LAMBADA, PIQA, RACE, SciQ, Record, OBQA) | ARC Challenge: 21.22 | 18 |
| Zero-shot Natural Language Understanding | LM-Evaluation-Harness (ARC, BoolQ, HellaSwag, LAMBADA, PIQA, RACE, SciQ, Record, OBQA) | ARC Challenge: 20.9 | 13 |
