PAPO: Stabilizing Rubric Integration Training via Decoupled Advantage Normalization

About

We propose Process-Aware Policy Optimization (PAPO), a method that integrates process-level evaluation into Group Relative Policy Optimization (GRPO) through decoupled advantage normalization, addressing two limitations of existing reward designs. Outcome reward models (ORM) evaluate only final-answer correctness, treating all correct responses identically regardless of reasoning quality, and gradually lose the advantage signal as groups become uniformly correct. Process reward models (PRM) offer richer supervision, but directly using PRM scores causes reward hacking, where models exploit verbosity to inflate scores while accuracy collapses. PAPO resolves both by composing the advantage from an outcome component A_out, derived from the ORM and normalized over all responses in a group, and a process component A_proc, derived from a rubric-based PRM and normalized exclusively among the correct responses. This decoupled design ensures that A_out anchors training on correctness while A_proc differentiates reasoning quality without distorting the outcome signal. Experiments across multiple model scales and six benchmarks demonstrate that PAPO consistently outperforms ORM, reaching 51.3% vs. 46.3% on OlympiadBench while continuing to improve as ORM plateaus and declines.
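Concretely, for a group of G sampled responses the decoupled normalization can be sketched as follows. This is a minimal illustration, assuming GRPO-style z-score normalization within the group and a simple additive combination with a hypothetical weight `alpha`; it is not the paper's exact formulation.

```python
import numpy as np

def papo_advantages(outcome_rewards, process_scores, alpha=1.0, eps=1e-8):
    """Sketch of PAPO-style decoupled advantages for one sampled group.

    outcome_rewards: binary correctness from the ORM, shape (G,)
    process_scores:  rubric-based PRM scores, shape (G,)
    alpha:           hypothetical weight on the process component
    """
    r_out = np.asarray(outcome_rewards, dtype=float)
    r_proc = np.asarray(process_scores, dtype=float)

    # Outcome component: normalized over ALL responses in the group
    # (standard GRPO-style group normalization).
    a_out = (r_out - r_out.mean()) / (r_out.std() + eps)

    # Process component: normalized EXCLUSIVELY among correct responses,
    # so verbose-but-incorrect responses cannot inflate their advantage.
    a_proc = np.zeros_like(r_proc)
    correct = r_out > 0.5
    if correct.sum() > 1:  # need at least two correct samples to normalize
        sub = r_proc[correct]
        a_proc[correct] = (sub - sub.mean()) / (sub.std() + eps)

    # A_out anchors correctness; A_proc differentiates reasoning quality.
    return a_out + alpha * a_proc
```

Note that when every response in the group is correct, A_out collapses to zero (the uniform-correctness failure mode of ORM-only training under this normalization), while A_proc still separates responses by reasoning quality.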

Zelin Tan, Zhouliang Yu, Bohan Lin, Zijie Geng, Hejia Geng, Yudong Zhang, Mulei Zhang, Yang Chen, Shuyue Hu, Zhenfei Yin, Chen Zhang, Lei Bai • 2026

Related benchmarks

Task                     Dataset         Result                   Rank
Code Generation          HumanEval       Accuracy@4: 70           12
Competition Mathematics  OlympiadBench   Accuracy (avg@4): 61.1   12
Competition Mathematics  AIME 2024       Accuracy (avg@4): 34.5   12
Competition Mathematics  AIME 2025       Accuracy (avg@4): 30.7   12
Standard Mathematics     MATH 500        Accuracy@4: 87.4         12
STEM Reasoning           GPQA Diamond    Accuracy (avg@4): 55     12
