
CALM: Unleashing the Cross-Lingual Self-Aligning Ability of Language Model Question Answering

About

Large Language Models (LLMs) are pretrained on extensive multilingual corpora to acquire both language-specific cultural knowledge and general knowledge. While LLMs should ideally provide consistent responses to culture-independent questions across languages, in practice we observe significant performance disparities. To address this, we explore the Cross-Lingual Self-Aligning ability of Language Models (CALM) to align knowledge across languages. Specifically, for a given question, we sample multiple responses across different languages and select the most self-consistent response as the target, leaving the remaining responses as negative examples. We then employ direct preference optimization (DPO) to align the model's knowledge across different languages. Evaluations on the MedQA and X-CSQA datasets demonstrate CALM's effectiveness in enhancing cross-lingual knowledge question answering, in both zero-shot and retrieval-augmented settings. We also find that increasing the number of languages involved in CALM training leads to higher accuracy and consistency. We offer a qualitative analysis of how cross-lingual consistency can enhance knowledge alignment and explore the method's generalizability.
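The pair-construction step described above (majority-vote self-consistency across languages, with disagreeing answers kept as negatives for DPO) can be sketched as follows. This is an illustrative sketch only; the function name, the per-language answer format, and the tie-breaking behavior are assumptions not specified in the abstract, and answer normalization plus the actual DPO training are out of scope.

```python
from collections import Counter

def build_preference_pair(responses_by_language):
    """For one question, take the answer sampled in each language,
    pick the most self-consistent (majority) answer as the preferred
    target, and treat every differing answer as a rejected example.
    Ties are broken arbitrarily by Counter ordering (an assumption)."""
    answers = list(responses_by_language.values())
    # Self-consistency via majority vote: the answer produced by the
    # largest number of languages becomes the chosen (preferred) response.
    chosen, _ = Counter(answers).most_common(1)[0]
    # Every response that disagrees with the majority answer becomes
    # a rejected (negative) example for the DPO preference pair.
    rejected = [a for a in answers if a != chosen]
    return chosen, rejected

# Example: one answer sampled per language for the same question.
sampled = {"en": "B", "es": "B", "zh": "C", "fr": "B"}
chosen, rejected = build_preference_pair(sampled)
# chosen == "B", rejected == ["C"]
```

The resulting (chosen, rejected) pairs are the standard input format for DPO-style preference tuning, e.g. one training example per rejected response.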

Yumeng Wang, Zhiyuan Fan, Qingyun Wang, May Fung, Heng Ji • 2025

Related benchmarks

Task                                 Dataset  Result         Rank
Multilingual Language Understanding  MMMLU    CLC (all) 4.2  30
