
Solving math word problems with process- and outcome-based feedback

About

Recent work has shown that asking language models to generate reasoning steps improves performance on many reasoning tasks. When moving beyond prompting, this raises the question of how we should supervise such models: outcome-based approaches which supervise the final result, or process-based approaches which supervise the reasoning process itself? Differences between these approaches might naturally be expected not just in final-answer errors but also in reasoning errors, which can be difficult to detect and are problematic in many real-world domains such as education. We run the first comprehensive comparison between process- and outcome-based approaches trained on a natural language task, GSM8K. We find that pure outcome-based supervision produces similar final-answer error rates with less label supervision. However, for correct reasoning steps we find it necessary to use process-based supervision or supervision from learned reward models that emulate process-based feedback. In total, we improve the previous best results from 16.8% $\to$ 12.7% final-answer error and 14.0% $\to$ 3.4% reasoning error among final-answer-correct solutions.
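The core contrast in the abstract can be made concrete with a small sketch. The function names, data, and scoring scheme below are illustrative assumptions for exposition, not the paper's implementation: outcome-based supervision scores only whether the final answer matches the reference, while process-based supervision scores each reasoning step (e.g. via human labels or a learned reward model), so it can penalize a solution that reaches the right answer through a flawed step.

```python
# Hedged sketch: outcome-based vs process-based supervision signals for a
# step-by-step solution. All names and the scoring scheme are illustrative
# assumptions, not the paper's actual training setup.

def outcome_based_reward(final_answer: str, reference_answer: str) -> float:
    """Outcome-based supervision: only the final result is checked."""
    return 1.0 if final_answer == reference_answer else 0.0

def process_based_reward(solution_steps: list[str], step_labels: list[int]) -> float:
    """Process-based supervision: every reasoning step is labeled.

    step_labels[i] is 1 if step i is judged correct, else 0
    (e.g. by an annotator or a learned reward model).
    """
    if not solution_steps:
        return 0.0
    return sum(step_labels) / len(solution_steps)

# Example: a solution whose final answer is right but whose middle
# step contains a reasoning error.
steps = ["2 + 3 = 5", "5 * 4 = 19", "19 + 1 = 20"]  # second step is wrong
outcome = outcome_based_reward(final_answer="20", reference_answer="20")
process = process_based_reward(steps, step_labels=[1, 0, 1])
print(outcome)  # 1.0 -- outcome-based feedback misses the flawed step
print(process)  # ~0.67 -- process-based feedback penalizes it
```

This illustrates the paper's headline finding: the two signals agree on final-answer correctness, but only step-level feedback distinguishes final-answer-correct solutions with reasoning errors from fully correct ones.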

Jonathan Uesato, Nate Kushman, Ramana Kumar, Francis Song, Noah Siegel, Lisa Wang, Antonia Creswell, Geoffrey Irving, Irina Higgins • 2022

Related benchmarks

Task                         | Dataset              | Result        | Rank
Mathematical Reasoning       | GSM8K (test)         | Accuracy 87.3 | 751
Code Generation              | LiveCodeBench        | Pass@1 79.4   | 89
Mathematical Problem Solving | MATH (test)          | Accuracy 50.3 | 25
Code Generation              | LiveCodeBench Medium | --            | 23
Code Generation              | LiveCodeBench Hard   | Pass@1 62.3   | 21
Code Generation              | LiveCodeBench Easy   | Pass@1 100    | 6
