
NeuronMoE: Neuron-Guided Mixture-of-Experts for Efficient Multilingual LLM Extension

About

Extending large language models to low-resource languages is essential for global accessibility, but training a separate model per language is prohibitively expensive. Mixture-of-Experts (MoE) architectures address this by adding sparse language-specific parameters, but determining how many experts each layer needs remains an open question. Current approaches allocate experts based on layer-level similarity, yet language processing exhibits fine-grained specialization at the level of individual neurons. We propose NeuronMoE, a method that analyzes language-specific neurons across all transformer components and allocates experts per layer based on empirically measured cross-lingual neuron diversity. Applied to Llama-3.2-3B for low-resource languages (Greek, Turkish, and Hungarian), this approach achieves approximately 40% average parameter reduction while matching the performance of the LayerMoE baseline. We find that low-resource language experts independently develop neuron specialization patterns that mirror those of the high-resource language and are concentrated in early and late layers. This reveals potential universal architectural principles in how multilingual models organize linguistic knowledge.
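The core idea of diversity-guided allocation can be sketched as follows. This is a minimal illustration, not the paper's exact procedure: it assumes language-specific neurons are identified by thresholding per-language activation rates, measures each layer's cross-lingual diversity as the fraction of neurons specific to exactly one language, and distributes an expert budget proportionally to that diversity. All function names, the threshold criterion, and the proportional rounding rule are illustrative assumptions.

```python
import numpy as np

def language_specific_neurons(act_rates, threshold=0.9):
    # act_rates: (n_langs, n_neurons) activation rates for one layer.
    # Illustrative criterion: a neuron is "language-specific" if its
    # activation rate exceeds the threshold for exactly one language.
    active = act_rates > threshold            # boolean (n_langs, n_neurons)
    return active.sum(axis=0) == 1            # mask over neurons

def allocate_experts(act_rates_per_layer, total_experts, min_experts=1):
    # act_rates_per_layer: (n_layers, n_langs, n_neurons).
    # Diversity per layer = fraction of language-specific neurons;
    # the expert budget is split roughly in proportion to diversity,
    # with every layer guaranteed at least `min_experts`.
    diversity = np.array([language_specific_neurons(layer).mean()
                          for layer in act_rates_per_layer])
    weights = diversity / diversity.sum()
    counts = np.round(weights * total_experts).astype(int)
    return np.maximum(min_experts, counts)
```

Under this scheme, a layer whose neurons barely differ across languages receives only the minimum expert count, which is where the parameter savings relative to a uniform per-layer allocation would come from.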

Rongzhi Li, Hitomi Yanaka • 2026

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Commonsense Reasoning | HellaSwag | Accuracy 76.01 | 1891 |
| Multitask Language Understanding | MMLU | Accuracy 55.28 | 413 |
| General Knowledge | MMLU | Accuracy 56.89 | 234 |
| Reasoning | ARC | Accuracy 48.72 | 94 |
| Reading Comprehension | Belebele | Accuracy 75.33 | 39 |
| Reading Comprehension | Belebele EN | Accuracy 75.33 | 22 |
| Language Understanding | MMLU EN | Accuracy 56.21 | 21 |
| Commonsense Reasoning | HellaSwag EN | Accuracy 76.53 | 14 |
| General Knowledge | MMLU EL | Accuracy 43.95 | 8 |
| Reasoning | ARC EN | Accuracy 50.17 | 8 |

Showing 10 of 22 rows
