
Less, but Better: Efficient Multilingual Expansion for LLMs via Layer-wise Mixture-of-Experts

About

Continually expanding existing large language models (LLMs) to new languages is a promising yet challenging approach to building powerful multilingual LLMs. The biggest challenge is to make the model continuously learn new languages while preserving its proficiency in old ones. To achieve this, recent work utilizes the Mixture-of-Experts (MoE) architecture to expand to new languages by adding new experts, and avoids catastrophic forgetting of old languages by routing the corresponding tokens to the original model backbone (old experts). Although intuitive, this kind of method is parameter-costly when expanding to new languages and still inevitably impacts the performance of old languages. To address these limitations, we analyze the language characteristics of different layers in LLMs and propose a layer-wise expert allocation algorithm (LayerMoE) to determine the appropriate number of new experts for each layer. Specifically, we find that different layers in LLMs exhibit different representation similarities between languages, and then utilize this similarity as the indicator to allocate experts for each layer, i.e., the higher the similarity, the fewer the experts. Additionally, to further mitigate the forgetting of old languages, we add a classifier in front of the router network on the layers with higher similarity to guide the routing of old-language tokens. Experimental results show that our method outperforms the previous state-of-the-art baseline with 60% fewer experts in the single-expansion setting and with 33.3% fewer experts in the lifelong-expansion setting, demonstrating the effectiveness of our method.
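The core allocation idea — fewer new experts on layers where old- and new-language representations are more similar — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, the inverse-similarity weighting, and the rounding scheme are all assumptions.

```python
# Hypothetical sketch of LayerMoE's similarity-based expert allocation.
# The exact allocation rule used in the paper may differ; this only
# illustrates "higher similarity -> fewer new experts" under a budget.

def allocate_experts(layer_similarities, total_new_experts):
    """Distribute a fixed budget of new experts across layers.

    layer_similarities: per-layer similarity between old- and
    new-language representations, each in [0, 1].
    Layers with lower similarity receive more new experts.
    """
    # Invert similarity so dissimilar layers get larger weights.
    weights = [1.0 - s for s in layer_similarities]
    norm = sum(weights)
    # Proportional allocation, rounded down; leftover experts go to
    # the most dissimilar layers first.
    alloc = [int(total_new_experts * w / norm) for w in weights]
    remainder = total_new_experts - sum(alloc)
    for idx in sorted(range(len(weights)), key=lambda i: -weights[i])[:remainder]:
        alloc[idx] += 1
    return alloc

# Example: 4 layers with decreasing cross-lingual similarity and a
# budget of 8 new experts.
print(allocate_experts([0.9, 0.7, 0.4, 0.2], 8))  # -> [0, 1, 3, 4]
```

Note that the most similar layer receives zero new experts here; in the paper, layers with high similarity are also the ones that receive the extra routing classifier for old-language tokens.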

Xue Zhang, Yunlong Liang, Fandong Meng, Songming Zhang, Yufeng Chen, Jinan Xu, Jie Zhou • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| General Knowledge | MMLU | Accuracy | 55.79 | 234 |
| Reading Comprehension | Belebele EN | Accuracy | 75.33 | 22 |
| Language Understanding | MMLU EN | Accuracy | 56.05 | 21 |
| Commonsense Reasoning | HellaSwag EN | Accuracy | 76.5 | 14 |
| Commonsense Reasoning | HellaSwag EL | Accuracy | 52.58 | 8 |
| General Knowledge | MMLU EL | Accuracy | 44.06 | 8 |
| Reading Comprehension | Belebele EL | Accuracy | 67.33 | 8 |
| Reasoning | ARC EL | Accuracy | 37.5 | 8 |
| Reasoning | ARC EN | Accuracy | 49.32 | 8 |
| Commonsense Reasoning | ARC Challenge EN | Accuracy | 49.91 | 6 |

Showing 10 of 18 rows.
