Expert Threshold Routing for Autoregressive Language Modeling with Dynamic Computation Allocation and Load Balancing

About

Token-choice Mixture-of-Experts (TC-MoE) routes each token to a fixed number of experts, limiting dynamic computation allocation and requiring auxiliary losses to maintain load balance. We propose Expert Threshold (ET) routing, where each expert maintains an exponential moving average (EMA) threshold estimated from the global token distribution. At both training and inference, each token is independently routed to an expert if its score exceeds the expert's threshold, enabling dynamic computation allocation while achieving load balance without auxiliary losses. This fully causal mechanism eliminates dependence on other tokens in the batch, making it well-suited for autoregressive language modeling. In pretraining experiments scaling to 2.4B parameters on FineWeb-Edu, ET achieves 0.067 lower cross-entropy loss than TC-MoE, equivalent to reaching the same performance with 1.6$\times$ fewer tokens.
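The routing rule described above lends itself to a short sketch. Below is a minimal, hypothetical PyTorch rendering of ET routing under stated assumptions: router scores come from a per-expert sigmoid gate, and each expert's EMA threshold tracks the score quantile that would admit a target fraction of tokens. The names `target_rate` and `ema_decay`, and the quantile-based threshold update, are illustrative assumptions, not details confirmed by the paper.

```python
import torch

class ETRouter(torch.nn.Module):
    """Sketch of Expert Threshold (ET) routing with per-expert EMA thresholds."""

    def __init__(self, d_model: int, n_experts: int,
                 target_rate: float = 0.125, ema_decay: float = 0.99):
        super().__init__()
        self.gate = torch.nn.Linear(d_model, n_experts, bias=False)
        self.target_rate = target_rate  # assumed: desired fraction of tokens per expert
        self.ema_decay = ema_decay
        # One running threshold per expert; 0.5 is the sigmoid midpoint at init.
        self.register_buffer("threshold", torch.full((n_experts,), 0.5))

    def forward(self, x: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
        # x: (n_tokens, d_model) -> scores: (n_tokens, n_experts)
        scores = torch.sigmoid(self.gate(x))
        if self.training:
            # Assumed update rule: estimate, per expert, the score quantile that
            # would admit `target_rate` of the observed tokens, then track it
            # with an exponential moving average.
            with torch.no_grad():
                q = torch.quantile(scores, 1.0 - self.target_rate, dim=0)
                self.threshold.mul_(self.ema_decay).add_(q, alpha=1 - self.ema_decay)
        # A token is routed to expert e iff its score exceeds expert e's threshold.
        # The decision is per token: at inference the threshold is a frozen
        # running statistic, so no other token in the batch is consulted.
        mask = scores > self.threshold  # (n_tokens, n_experts), boolean
        return mask, scores

# Usage: each row of `mask` may activate zero, one, or several experts.
router = ETRouter(d_model=512, n_experts=8)
x = torch.randn(1024, 512)
mask, scores = router(x)
```

Because each token is compared against frozen per-expert statistics rather than ranked against other tokens, the number of active experts varies per token, which is the source of the dynamic computation allocation, and the mechanism stays causal for autoregressive decoding.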

Hanchi Sun, Yixin Liu, Yonghui Wu, Lichao Sun • 2026

Related benchmarks

Task | Dataset | Result | Rank
Downstream Performance Evaluation | CORE | CORE Score: 19.876 | 17
Language Modeling | FineWeb-Edu 100B (val) | CE Loss: 2.62 | 13
Comprehensive Optimization and Reasoning Evaluation | CORE | CORE Score: 25.14 | 4

Other info

GitHub
