MPO: Multilingual Safety Alignment via Reward Gap Optimization

About

Large language models (LLMs) have become increasingly central to AI applications worldwide, necessitating robust multilingual safety alignment to ensure secure deployment across diverse linguistic contexts. Existing preference learning methods for safety alignment, such as RLHF and DPO, are primarily monolingual and struggle with noisy multilingual data. To address these limitations, we introduce Multilingual reward gaP Optimization (MPO), a novel approach that leverages the well-aligned safety capabilities of the dominant language (English) to improve safety alignment across multiple languages. MPO directly minimizes the reward gap difference between the dominant language and target languages, effectively transferring safety capabilities while preserving the original strengths of the dominant language. Extensive experiments on three LLMs, LLaMA-3.1, Gemma-2, and Qwen2.5, validate MPO's efficacy in multilingual safety alignment without degrading general multilingual utility.
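The abstract describes MPO only at a high level, so the sketch below is one plausible reading of "minimizing the reward gap difference," not the paper's exact objective. It assumes a DPO-style implicit reward computed from policy and reference log-probabilities of paired safe (preferred) and unsafe (dispreferred) responses, and penalizes the squared difference between the English reward gap and the target-language gap; all function names and numbers are illustrative.

```python
import torch

def dpo_implicit_reward_gap(policy_logp_w, policy_logp_l,
                            ref_logp_w, ref_logp_l, beta=0.1):
    # DPO-style implicit reward gap between a safe (preferred, y_w) and an
    # unsafe (dispreferred, y_l) response to the same prompt:
    #   beta * [log pi(y_w|x)/pi_ref(y_w|x) - log pi(y_l|x)/pi_ref(y_l|x)]
    return beta * ((policy_logp_w - ref_logp_w) - (policy_logp_l - ref_logp_l))

def reward_gap_transfer_loss(gap_en, gap_tgt):
    # Illustrative transfer objective (an assumption, not the paper's exact
    # loss): pull each target language's reward gap toward the English gap.
    # Detaching the English gap treats the dominant language as a fixed
    # teacher signal, so its own alignment is not perturbed in the process.
    return ((gap_en.detach() - gap_tgt) ** 2).mean()

# Toy usage with made-up sequence log-probabilities (a batch of two pairs).
gap_en = dpo_implicit_reward_gap(
    torch.tensor([-12.0, -10.5]), torch.tensor([-20.0, -18.0]),
    torch.tensor([-13.0, -11.0]), torch.tensor([-19.0, -17.5]))
gap_tgt = dpo_implicit_reward_gap(
    torch.tensor([-15.0, -14.0]), torch.tensor([-16.0, -15.5]),
    torch.tensor([-15.5, -14.2]), torch.tensor([-16.2, -15.0]))
print(reward_gap_transfer_loss(gap_en, gap_tgt))
```

Under this reading, the detached English gap is what "preserves the original strengths of the dominant language": gradients flow only through the target-language terms, nudging them toward the already well-aligned English preference margins.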

Weixiang Zhao, Yulin Hu, Yang Deng, Tongtong Wu, Wenxuan Zhang, Jiahe Guo, An Zhang, Yanyan Zhao, Bing Qin, Tat-Seng Chua, Ting Liu • 2025

Related benchmarks

Task | Dataset | Result | Rank
Jailbreak attack success rate | MultiJail | ASR (EN): 2.22 | 18
Jailbreak attack success rate | AdvBench-x | ASR (EN): 0.38 | 18
Multilingual Safety | MultiJail Out-of-Distribution Languages (test) | Safety Violation Rate (KO): 2.22 | 10
Multilingual Safety | MultiJail In-Distribution Languages (test) | Safety Score (EN): 6.35 | 10
Safety Alignment | PKU-SafeRLHF in-distribution (test) | Accuracy (EN): 89.44 | 10
