
Autonomy-of-Experts Models

About

Mixture-of-Experts (MoE) models typically use a router to assign tokens to specific expert modules, activating only a subset of parameters and often outperforming dense models. We argue that the separation between the router's decision-making and the experts' execution is a critical yet overlooked issue, leading to suboptimal expert selection and ineffective learning. To address this, we propose Autonomy-of-Experts (AoE), a novel MoE paradigm in which experts autonomously select themselves to process inputs. AoE is based on the insight that an expert is aware of its own capacity to effectively process a token, an awareness reflected in the scale of its internal activations. In AoE, routers are removed; instead, experts pre-compute internal activations for inputs and are ranked by their activation norms. Only the top-ranking experts proceed with the forward pass, while the others abort. The overhead of pre-computing activations is reduced through low-rank weight factorization. This self-evaluating and partner-comparing approach ensures improved expert selection and effective learning. We pre-train language models with 700M to 4B parameters and demonstrate that AoE outperforms traditional MoE models with comparable efficiency.

Ang Lv, Ruobing Xie, Yining Qian, Songhao Wu, Xingwu Sun, Zhanhui Kang, Di Wang, Rui Yan • 2025
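
Below is a minimal PyTorch sketch of the selection mechanism described in the abstract: each expert pre-computes a low-rank internal activation for every token, experts are ranked per token by the norm of that activation, and only the top-ranked experts complete the forward pass. The module name AoEFeedForward, the hyperparameters (num_experts=8, top_k=2, rank=64), the ReLU nonlinearity, and the per-slot dispatch loop are illustrative assumptions rather than the paper's exact configuration; in particular, how the selected experts' outputs are weighted or normalized is omitted here.

import torch
import torch.nn as nn
import torch.nn.functional as F


class AoEFeedForward(nn.Module):
    """Sketch of an Autonomy-of-Experts (AoE) feed-forward layer (no router)."""

    def __init__(self, d_model=512, d_ffn=2048, num_experts=8, top_k=2, rank=64):
        super().__init__()
        self.num_experts = num_experts
        self.top_k = top_k
        # Each expert's up-projection is factorized as A (d_model -> rank) followed by
        # B (rank -> d_ffn), so the activation norms used for ranking can be
        # pre-computed cheaply from the low-dimensional A-projection alone.
        self.A = nn.Parameter(torch.randn(num_experts, d_model, rank) * d_model ** -0.5)
        self.B = nn.Parameter(torch.randn(num_experts, rank, d_ffn) * rank ** -0.5)
        self.W2 = nn.Parameter(torch.randn(num_experts, d_ffn, d_model) * d_ffn ** -0.5)

    def forward(self, x):
        # x: (num_tokens, d_model), tokens already flattened across batch and sequence.
        # 1) Every expert pre-computes its low-rank activation for every token.
        cache = torch.einsum("td,edr->etr", x, self.A)   # (num_experts, num_tokens, rank)

        # 2) Experts are ranked per token by the norm of their internal activations;
        #    this replaces the router's scores.
        scores = cache.norm(dim=-1)                              # (num_experts, num_tokens)
        _, top_experts = scores.topk(self.top_k, dim=0)          # (top_k, num_tokens)

        # 3) Only the top-ranking experts finish the forward pass; the others abort.
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e in range(self.num_experts):
                mask = top_experts[slot] == e
                if not mask.any():
                    continue
                h = cache[e, mask]                # reuse the cached low-rank activation
                h = F.relu(h @ self.B[e])         # complete the up-projection
                out[mask] += h @ self.W2[e]       # down-project back to d_model
        return out


# Usage: layer = AoEFeedForward(); y = layer(torch.randn(10, 512))  # y: (10, 512)

Compared with a conventional MoE layer, the router network is gone entirely; the ranking signal comes from the experts' own low-rank projections, which are reused when the winning experts finish their computation, so the extra pre-computation stays cheap.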

Related benchmarks

Task                            | Dataset       | Metric              | Result | Rank
Commonsense Reasoning           | HellaSwag     | --                  | --     | 1891
Commonsense Reasoning           | WinoGrande    | Accuracy            | 50.2   | 1085
Question Answering              | ARC Challenge | --                  | --     | 906
Question Answering              | OpenBookQA    | Normalized Accuracy | 25     | 102
Language Modeling               | OpenWebText   | Perplexity          | 30     | 91
Question Answering              | ARC Easy      | Normalized Accuracy | 33.84  | 18
Commonsense Reasoning           | PIQA          | Normalized Accuracy | 56.09  | 13
Natural Language Understanding  | GLUE          | QQP Accuracy        | 36.82  | 8
