
Towards Better Understanding of Program-of-Thought Reasoning in Cross-Lingual and Multilingual Environments

About

Multi-step reasoning is essential for large language models (LLMs), yet multilingual performance remains challenging. While Chain-of-Thought (CoT) prompting improves reasoning, it struggles with non-English languages due to the entanglement of reasoning and execution. Program-of-Thought (PoT) prompting separates reasoning from execution, offering a promising alternative but shifting the challenge to generating programs from non-English questions. We propose a framework to evaluate PoT by separating multilingual reasoning from code execution to examine (i) the impact of fine-tuning on question-reasoning alignment and (ii) how reasoning quality affects answer correctness. Our findings demonstrate that PoT fine-tuning substantially enhances multilingual reasoning, outperforming CoT fine-tuned models. We further demonstrate a strong correlation between reasoning quality (measured through code quality) and answer accuracy, highlighting its potential as a test-time performance improvement heuristic.
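The PoT idea of separating reasoning from execution can be illustrated with a minimal sketch: the model emits a program rather than a free-text answer, and a trusted interpreter executes it. The `solve_with_pot` helper and the sample program below are hypothetical, not from the paper.

```python
# Minimal sketch of Program-of-Thought (PoT) execution.
# The model's output is assumed to be a Python program that
# stores its final result in a variable named `answer`.

def solve_with_pot(generated_program: str):
    """Execute model-generated code and return its `answer` variable."""
    namespace = {}
    exec(generated_program, namespace)  # execution is delegated to the interpreter
    return namespace["answer"]

# Hypothetical program a model might generate for a GSM-style question
# posed in a non-English language, e.g. "A shop packs 12 apples per box.
# How many apples are in 5 boxes?"
program = """
boxes = 5
apples_per_box = 12
answer = boxes * apples_per_box
"""

print(solve_with_pot(program))  # 60
```

The key point of the split: even if the question is non-English, the arithmetic is carried out by the interpreter, so answer correctness hinges only on whether the generated program faithfully encodes the question, which is what the paper measures via code quality.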

Patomporn Payoungkhamdee, Pume Tuchinda, Jinheon Baek, Samuel Cahyawijaya, Can Udomcharoenchaikit, Potsawee Manakul, Peerat Limkonchotiwat, Ekapol Chuangsuwanich, Sarana Nutanong• 2025

Related benchmarks

Task                                 Dataset      Metric           Result  Rank
Multilingual Mathematical Reasoning  MGSM (test)  Accuracy         75.6    57
Multilingual Mathematical Reasoning  MGSM         Accuracy (Bn)    46      36
Mathematical Reasoning               MGSM (test)  Accuracy (MGSM)  75.6    29
