Math-Shepherd: Verify and Reinforce LLMs Step-by-step without Human Annotations
About
In this paper, we present an innovative process-oriented math process reward model called **Math-Shepherd**, which assigns a reward score to each step of a math problem solution. The training of Math-Shepherd is achieved using automatically constructed process-wise supervision data, breaking the bottleneck of heavy reliance on manual annotation in existing work. We explore the effectiveness of Math-Shepherd in two scenarios: 1) *Verification*: Math-Shepherd is utilized for reranking multiple outputs generated by Large Language Models (LLMs); 2) *Reinforcement Learning*: Math-Shepherd is employed to reinforce LLMs with step-by-step Proximal Policy Optimization (PPO). With Math-Shepherd, a series of open-source LLMs demonstrates exceptional performance. For instance, step-by-step PPO with Math-Shepherd significantly improves the accuracy of Mistral-7B (77.9% → 84.1% on GSM8K and 28.6% → 33.0% on MATH). The accuracy can be further enhanced to 89.1% and 43.5% on GSM8K and MATH, respectively, with the verification of Math-Shepherd. We believe that automatic process supervision holds significant potential for the future evolution of LLMs.
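The verification scenario above can be sketched as best-of-N reranking: score every step of each candidate solution with the process reward model, aggregate the step scores into one solution score, and keep the highest-scoring candidate. The sketch below is a minimal illustration under stated assumptions: `step_reward` is a hypothetical stand-in for the trained Math-Shepherd model, and aggregating by the minimum step score is one common choice for combining process rewards, not necessarily the exact scheme used in the paper.

```python
from typing import Callable, List

# A solution is a list of reasoning steps (strings).
Solution = List[str]


def solution_score(steps: Solution, step_reward: Callable[[str], float]) -> float:
    """Aggregate per-step rewards into a single solution score.

    Here we take the minimum step score, so one bad step sinks the whole
    solution. This aggregation is an assumption for illustration.
    """
    return min(step_reward(step) for step in steps)


def rerank(candidates: List[Solution], step_reward: Callable[[str], float]) -> Solution:
    """Best-of-N verification: return the candidate with the highest
    aggregated process reward."""
    return max(candidates, key=lambda steps: solution_score(steps, step_reward))


if __name__ == "__main__":
    # Toy stand-in for a trained PRM: rewards steps it "trusts".
    toy_prm = lambda step: 0.9 if "correct" in step else 0.2

    candidates = [
        ["correct step 1", "a flawed step"],       # min score 0.2
        ["correct step 1", "correct step 2"],      # min score 0.9
    ]
    best = rerank(candidates, toy_prm)
    print(best)  # the all-correct candidate wins
```

A real deployment would replace `toy_prm` with a forward pass of the trained reward model over each step in context.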
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Mathematical Reasoning | GSM8K (test) | Accuracy | 87.1 | 751 |
| Mathematical Reasoning | MATH | Accuracy | 81.7 | 643 |
| Mathematical Reasoning | MATH | Accuracy | 76.6 | 535 |
| Mathematical Reasoning | MATH (test) | Overall Accuracy | 33 | 433 |
| Mathematical Reasoning | MATH500 (test) | Accuracy | 55.8 | 381 |
| Mathematical Reasoning | GSM8K | Accuracy (GSM8K) | 96.2 | 358 |
| Mathematical Reasoning | AIME 25 | Accuracy | 87.5 | 201 |
| Mathematical Reasoning | CollegeMATH | Accuracy | 45.5 | 161 |
| Mathematical Reasoning | GSM8K (test) | Accuracy | 93.3 | 155 |
| Mathematical Reasoning | Olympiad Bench | Pass@1 Accuracy | 39.1 | 115 |