
Coupling Experts and Routers in Mixture-of-Experts via an Auxiliary Loss

About

Mixture-of-Experts (MoE) models lack explicit constraints to ensure the router's decisions align well with the experts' capabilities, which ultimately limits model performance. To address this, we propose expert-router coupling (ERC) loss, a lightweight auxiliary loss that tightly couples the router's decisions with expert capabilities. Our approach treats each expert's router embedding as a proxy token for the tokens assigned to that expert, and feeds perturbed router embeddings through the experts to obtain intermediate activations. The ERC loss enforces two constraints on these activations: (1) Each expert must exhibit higher activation for its own proxy token than for the proxy tokens of any other expert. (2) Each proxy token must elicit stronger activation from its corresponding expert than from any other expert. These constraints jointly ensure that each router embedding faithfully represents its corresponding expert's capability, while each expert specializes in processing the tokens actually routed to it. The ERC loss is computationally efficient, operating only on $n^2$ activations, where $n$ is the number of experts. This represents a fixed cost independent of batch size, unlike prior coupling methods that scale with the number of tokens (often millions per batch). Through pre-training MoE-LLMs ranging from 3B to 15B parameters and extensive analysis on trillions of tokens, we demonstrate the effectiveness of the ERC loss. Moreover, the ERC loss offers flexible control and quantitative tracking of expert specialization levels during training, providing valuable insights into MoEs.
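To make the two constraints concrete, here is a minimal sketch of how an ERC-style loss could be computed over the $n \times n$ matrix of expert activations on proxy tokens. The paper does not specify the exact loss form; the cross-entropy formulation below (row-wise for constraint 1, column-wise for constraint 2) is an illustrative assumption, as are the function and variable names.

```python
import numpy as np

def erc_loss(activations: np.ndarray) -> float:
    """Illustrative ERC-style loss (assumed cross-entropy form, not the
    paper's exact formulation).

    activations: (n, n) matrix where activations[i, j] is expert i's
    scalar (e.g. mean) intermediate activation when fed expert j's
    perturbed router embedding, used as a proxy token.
    """
    n = activations.shape[0]

    def softmax(x: np.ndarray, axis: int) -> np.ndarray:
        # Numerically stable softmax along the given axis.
        x = x - x.max(axis=axis, keepdims=True)
        e = np.exp(x)
        return e / e.sum(axis=axis, keepdims=True)

    diag = np.arange(n)
    # Constraint 1 (rows): each expert should activate more strongly on
    # its own proxy token than on any other expert's proxy token.
    row_probs = softmax(activations, axis=1)
    # Constraint 2 (columns): each proxy token should elicit stronger
    # activation from its own expert than from any other expert.
    col_probs = softmax(activations, axis=0)
    # Cross-entropy toward the diagonal enforces both constraints jointly.
    return float(-np.log(row_probs[diag, diag]).mean()
                 - np.log(col_probs[diag, diag]).mean())
```

Note that the loss touches only $n^2$ scalars regardless of batch size, which is the source of the fixed-cost property claimed above: a diagonally dominant activation matrix (experts and proxies mutually aligned) yields a lower loss than an undifferentiated one.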

Ang Lv, Jin Ma, Yiyuan Ma, Siyuan Qiao • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Mathematical Reasoning | GSM8K | Accuracy | 45.8 | 983 |
| Mathematical Problem Solving | MATH | Accuracy | 26.1 | 166 |
| Multitask Language Understanding | MMLU-Pro | Accuracy | 31.9 | 99 |
| Massive Multitask Language Understanding | MMLU | Accuracy | 64.6 | 31 |
| Big-Bench Hard Reasoning | BBH | Accuracy | 45.6 | 2 |
| Comprehensive Chinese Evaluation | C-Eval | Accuracy | 69.0 | 2 |
| General Intelligence Evaluation | AGI-Eval | Accuracy | 44.2 | 2 |
| Question Answering | TriviaQA | Accuracy | 49.1 | 2 |
