
Gained in Translation: Privileged Pairwise Judges Enhance Multilingual Reasoning

About

When asked a question in a language less represented in its training data, current reasoning large language models (RLMs) often perform dramatically worse than when asked the same question in English. In response, we introduce SP3F (Self-Play with Privileged Pairwise Feedback), a two-stage framework for enhancing multilingual reasoning without any data in the target language(s). First, we perform supervised fine-tuning (SFT) on translated versions of English question-answer pairs to raise base model correctness. Second, we perform RL in a self-play fashion with feedback from a pairwise judge, where the judge receives the English reference response as privileged information. Thus, even when none of the model's responses is completely correct, the privileged pairwise judge can still tell which response is better. End-to-end, SP3F greatly improves base model performance, even outperforming fully post-trained models on multiple math and non-math tasks with a fraction of the training data, across the single-language, multilingual, and unseen-language generalization settings.
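The second stage above hinges on one idea: two imperfect responses can still be ranked when the judge sees the English reference answer. The sketch below illustrates that loop under stated assumptions; the function names, the toy token-overlap judge, and the preference-pair output are all illustrative, not the paper's actual implementation (which would use an LLM judge and an RL objective such as a pairwise-preference loss).

```python
# Illustrative sketch of SP3F's stage 2: self-play with a privileged pairwise
# judge. Everything here (names, toy judge) is a hypothetical stand-in.
from typing import Callable, Tuple


def privileged_pairwise_judge(resp_a: str, resp_b: str, english_reference: str) -> int:
    """Return 0 if resp_a is judged better than resp_b, else 1.

    Toy stand-in judge: score each response by token overlap with the
    privileged English reference. A real judge would be an LLM prompted
    with both responses plus the reference.
    """
    ref_tokens = set(english_reference.split())

    def overlap(resp: str) -> float:
        return len(ref_tokens & set(resp.split())) / max(len(ref_tokens), 1)

    return 0 if overlap(resp_a) >= overlap(resp_b) else 1


def self_play_step(sample: Callable[[], str], english_reference: str) -> Tuple[str, str]:
    """Sample two responses from the current policy; the privileged judge
    labels them as a (chosen, rejected) preference pair for the RL update."""
    a, b = sample(), sample()
    winner = privileged_pairwise_judge(a, b, english_reference)
    return (a, b) if winner == 0 else (b, a)


# Toy usage: neither candidate matches the reference exactly, but the judge
# can still rank the one with the correct final answer above the other.
reference = "the answer is 42 because 6 times 7 equals 42"
candidates = ["la respuesta es 42", "la respuesta es 41"]
chosen, rejected = self_play_step(lambda: candidates.pop(0), reference)
```

The key property this sketch demonstrates is dense feedback: the judge returns a preference even when an exact-match reward would give zero to both responses.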

Lintang Sutawika, Gokul Swamy, Zhiwei Steven Wu, Graham Neubig • 2026

Related benchmarks

Task                                         | Dataset                                   | Result         | Rank
Multilingual Mathematical Reasoning          | MT Math100                                | Accuracy 60.1  | 24
Multilingual Reading Comprehension           | Belebele                                  | Accuracy 79.8  | 18
Multilingual Mathematical Reasoning          | MGSM (18 languages)                       | Accuracy 72.5  | 6
Multilingual Reasoning and General Knowledge | Overall (18 languages)                    | Accuracy 61.91 | 6
Multilingual Reading Comprehension           | Belebele (18 languages)                   | Accuracy 67.54 | 6
Multilingual General Knowledge               | Global MMLU Lite (subset of 18 languages) | Accuracy 50.76 | 6
