
LayAlign: Enhancing Multilingual Reasoning in Large Language Models via Layer-Wise Adaptive Fusion and Alignment Strategy

About

Despite being pretrained on multilingual corpora, large language models (LLMs) exhibit suboptimal performance on low-resource languages. Recent approaches have leveraged multilingual encoders alongside LLMs by introducing trainable parameters connecting the two models. However, these methods typically focus on the encoder's output, overlooking valuable information from other layers. We propose Layer-Wise Adaptive Fusion and Alignment Strategy (LayAlign), a framework that integrates representations from all encoder layers, coupled with an adaptive fusion-enhanced attention mechanism to enable layer-wise interaction between the LLM and the multilingual encoder. Extensive experiments on multilingual reasoning tasks, along with analyses of learned representations, show that our approach consistently outperforms existing baselines.

Zhiwen Ruan, Yixia Li, He Zhu, Longyue Wang, Weihua Luo, Kaifu Zhang, Yun Chen, Guanhua Chen • 2025
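To make the layer-wise fusion idea concrete, here is a minimal PyTorch sketch of how representations from every encoder layer might be combined into a single memory that the LLM can attend to. It is an illustration based only on the abstract, not the authors' released code: the module name `LayerwiseFusion`, the softmax-weighted mixture over layers, and the linear projection into the LLM's hidden size are all assumptions.

```python
# Minimal sketch of layer-wise adaptive fusion, assuming a PyTorch encoder
# that exposes the hidden states of every layer (e.g. via
# output_hidden_states=True in Hugging Face models). All names here are
# illustrative; this is not the authors' released implementation.
import torch
import torch.nn as nn


class LayerwiseFusion(nn.Module):
    """Combine hidden states from all encoder layers into one memory the
    LLM can attend to (hypothetical reading of the abstract)."""

    def __init__(self, num_layers: int, enc_dim: int, llm_dim: int):
        super().__init__()
        # One learnable scalar per encoder layer; a softmax turns them into
        # a convex mixture, i.e. the adaptive fusion weights.
        self.layer_weights = nn.Parameter(torch.zeros(num_layers))
        # Project fused encoder states into the LLM's hidden size so the
        # LLM's cross-attention can consume them.
        self.proj = nn.Linear(enc_dim, llm_dim)

    def forward(self, hidden_states: list[torch.Tensor]) -> torch.Tensor:
        # hidden_states: one (batch, seq_len, enc_dim) tensor per layer.
        stacked = torch.stack(hidden_states, dim=0)               # (L, B, S, D)
        weights = torch.softmax(self.layer_weights, dim=0)        # (L,)
        fused = (weights.view(-1, 1, 1, 1) * stacked).sum(dim=0)  # (B, S, D)
        return self.proj(fused)                                   # (B, S, llm_dim)


# Toy usage: fuse 13 hidden states (12 layers + embeddings) of a 768-dim
# encoder into a 4096-dim LLM space.
fusion = LayerwiseFusion(num_layers=13, enc_dim=768, llm_dim=4096)
states = [torch.randn(2, 16, 768) for _ in range(13)]
memory = fusion(states)  # shape: (2, 16, 4096)
```

In the paper's setting, each LLM layer would presumably attend to such a fused memory through trainable cross-attention while the backbone weights stay frozen; the sketch above covers only the fusion step, not the alignment training.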

Related benchmarks

Task                      | Dataset       | Metric                   | Result | Rank
--------------------------|---------------|--------------------------|--------|-----
Mathematical Reasoning    | MGSM (test)   | --                       | --     | 49
Mathematical Reasoning    | MGSM          | Accuracy (Bn)            | 1.6    | 30
Abstractive Summarization | XL-Sum (test) | Language Democratization | 24.42  | 20
Mathematical Reasoning    | MSVAMP        | Average Accuracy         | 40.6   | 20
Mathematical Reasoning    | AfriMGSM      | Accuracy (Amharic)       | 30     | 14
