
Language Models are Multilingual Chain-of-Thought Reasoners

About

We evaluate the reasoning abilities of large language models in multilingual settings. We introduce the Multilingual Grade School Math (MGSM) benchmark, created by manually translating 250 grade-school math problems from the GSM8K dataset (Cobbe et al., 2021) into ten typologically diverse languages. We find that the ability to solve MGSM problems via chain-of-thought prompting emerges with increasing model scale, and that models have strikingly strong multilingual reasoning abilities, even in underrepresented languages such as Bengali and Swahili. Finally, we show that the multilingual reasoning abilities of language models extend to other tasks such as commonsense reasoning and word-in-context semantic judgment. The MGSM benchmark is publicly available at https://github.com/google-research/url-nlp.
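Chain-of-thought prompting, as evaluated here, means prepending worked examples with intermediate reasoning so the model produces step-by-step solutions before a final answer. A minimal sketch of this setup is below; the exemplar text and the answer-extraction heuristic are illustrative assumptions, not the paper's exact prompt templates.

```python
import re

# Illustrative few-shot exemplar with explicit intermediate reasoning
# (an assumption for this sketch, not the paper's actual prompt).
EXEMPLAR = (
    "Q: Roger has 5 tennis balls. He buys 2 more cans of 3 tennis balls "
    "each. How many tennis balls does he have now?\n"
    "A: Roger starts with 5 balls. 2 cans of 3 balls each is 6 balls. "
    "5 + 6 = 11. The answer is 11.\n\n"
)

def build_cot_prompt(question: str) -> str:
    """Prepend the worked exemplar so the model emits its own reasoning."""
    return EXEMPLAR + f"Q: {question}\nA:"

def extract_answer(completion: str):
    """Heuristically take the last number in the completion as the answer."""
    numbers = re.findall(r"-?\d+(?:\.\d+)?", completion.replace(",", ""))
    return float(numbers[-1]) if numbers else None
```

In practice the prompt would be sent to a language model and `extract_answer` applied to its completion; accuracy on MGSM is then the fraction of problems where the extracted number matches the gold answer.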

Freda Shi, Mirac Suzgun, Markus Freitag, Xuezhi Wang, Suraj Srivats, Soroush Vosoughi, Hyung Won Chung, Yi Tay, Sebastian Ruder, Denny Zhou, Dipanjan Das, Jason Wei • 2022

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Natural Language Inference | XNLI (test) | Average Accuracy | 75.1 | 167 |
| Multilingual Mathematical Reasoning | MGSM (test) | Accuracy | 60 | 57 |
| Math Reasoning | MSVAMP (test) | Average Accuracy | 69.9 | 45 |
| Multilingual Mathematical Reasoning | MGSM | Accuracy (Bn) | 48.4 | 36 |
| Multilingual Mathematical Reasoning | MGSM 1.0 (test) | Accuracy (ru) | 64.9 | 35 |
| Multilingual Mathematical Reasoning | MSVAMP | Accuracy (English) | 60.6 | 33 |
| Causal Reasoning | XCOPA (test) | Accuracy (id) | 94 | 13 |
| Commonsense Reasoning | X-CSQA (test) | Accuracy (Sw) | 36.5 | 8 |
