
Save the Good Prefix: Precise Error Penalization via Process-Supervised RL to Enhance LLM Reasoning

About

Reinforcement learning (RL) has emerged as a powerful framework for improving the reasoning capabilities of large language models (LLMs). However, most existing RL approaches rely on sparse outcome rewards, which fail to credit correct intermediate steps in partially successful solutions. Process reward models (PRMs) offer fine-grained step-level supervision, but their scores are often noisy and difficult to evaluate. As a result, recent PRM benchmarks focus on a more objective capability: detecting the first incorrect step in a reasoning path. However, this evaluation target is misaligned with how PRMs are typically used in RL, where their step-wise scores are treated as raw rewards to maximize. To bridge this gap, we propose Verifiable Prefix Policy Optimization (VPPO), which uses PRMs only to localize the first error during RL. Given an incorrect rollout, VPPO partitions the trajectory into a verified correct prefix and an erroneous suffix based on the first error, rewarding the former while applying targeted penalties only after the detected mistake. This design yields stable, interpretable learning signals and improves credit assignment. Across multiple reasoning benchmarks, VPPO consistently outperforms sparse-reward RL and prior PRM-guided baselines on both Pass@1 and Pass@K.
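The prefix/suffix reward scheme described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name `vppo_step_rewards`, the score threshold, and the reward magnitudes are all hypothetical, and it assumes the PRM exposes one score per reasoning step of an incorrect rollout.

```python
def vppo_step_rewards(prm_scores, threshold=0.5,
                      prefix_reward=1.0, error_penalty=-1.0):
    """Assign per-step rewards for an incorrect rollout (hypothetical sketch).

    The PRM is used only to localize the first error: steps before it form
    the verified correct prefix (rewarded), steps from it onward form the
    erroneous suffix (penalized).
    """
    # Locate the first step whose PRM score falls below the threshold.
    first_error = next(
        (i for i, score in enumerate(prm_scores) if score < threshold),
        None,
    )
    if first_error is None:
        # No error detected: treat the whole trajectory as a correct prefix.
        return [prefix_reward] * len(prm_scores)
    # Reward the verified prefix; penalize only at and after the detected mistake.
    return ([prefix_reward] * first_error
            + [error_penalty] * (len(prm_scores) - first_error))
```

For example, PRM scores `[0.9, 0.8, 0.2, 0.1]` would yield rewards `[1.0, 1.0, -1.0, -1.0]`: the first two steps are credited as the verified prefix, and the penalty applies only from the detected first error onward.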

Haolin Liu, Dian Yu, Sidi Lu, Yujun Zhou, Rui Liu, Zhenwen Liang, Haitao Mi, Chen-Yu Wei, Dong Yu • 2026

Related benchmarks

Task                   | Dataset        | Result                | Rank
Mathematical Reasoning | Minerva        | Pass@1 (Avg@16): 41.7 | 32
Mathematical Reasoning | AMC23          | Avg@16: 74.5          | 29
Mathematical Reasoning | HMMT Feb 2025  | --                    | 23
Mathematical Reasoning | AIME 2025      | Avg@16: 29.2          | 15
Mathematical Reasoning | AIME 2024      | Avg@16: 31.8          | 15
Mathematical Reasoning | OlympiadBench  | Avg@16: 60.6          | 15
Mathematical Reasoning | HMMT Feb 2024  | Avg@16: 19.6          | 15
Mathematical Reasoning | MATH 500       | Avg@16: 88.0          | 15
