
Process Reinforcement through Implicit Rewards

About

Dense process rewards have proven a more effective alternative to sparse outcome-level rewards in the inference-time scaling of large language models (LLMs), particularly in tasks requiring complex multi-step reasoning. Dense rewards are also an appealing choice for the reinforcement learning (RL) of LLMs, since their fine-grained feedback can address inherent issues of outcome rewards such as training inefficiency and poor credit assignment, yet this potential remains largely unrealized. The main obstacle is training process reward models (PRMs) online: collecting high-quality process labels is prohibitively expensive, which makes PRMs particularly vulnerable to reward hacking. To address these challenges, we propose PRIME (Process Reinforcement through IMplicit rEwards), which enables online PRM updates using only policy rollouts and outcome labels, through implicit process rewards. PRIME combines well with various advantage functions and forgoes the dedicated reward-model training phase that existing approaches require, substantially reducing development overhead. We demonstrate PRIME's effectiveness on competition-level math and coding. Starting from Qwen2.5-Math-7B-Base, PRIME achieves a 15.1% average improvement across several key reasoning benchmarks over the SFT model. Notably, the resulting model, Eurus-2-7B-PRIME, surpasses Qwen2.5-Math-7B-Instruct on seven reasoning benchmarks with 10% of its training data.
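The core idea the abstract alludes to can be illustrated with a minimal sketch: in the implicit-PRM formulation, a model trained only on outcome labels yields dense per-token rewards as the scaled log-probability ratio between the PRM and a frozen reference model. The log-probability values and the `beta` coefficient below are hypothetical, chosen only for illustration.

```python
def implicit_process_rewards(logp_prm, logp_ref, beta=0.05):
    """Per-token implicit process rewards.

    Each token's reward is beta * (log pi_phi - log pi_ref):
    tokens the (outcome-trained) PRM finds more likely than the
    reference model receive positive dense rewards, with no
    step-level human labels required.

    logp_prm, logp_ref: per-token log-probabilities of the same
    rollout under the PRM and the frozen reference model.
    """
    return [beta * (lp - lr) for lp, lr in zip(logp_prm, logp_ref)]

# Hypothetical log-probabilities for a 4-token rollout.
logp_prm = [-0.9, -1.2, -0.4, -2.0]
logp_ref = [-1.0, -1.0, -1.0, -1.0]

rewards = implicit_process_rewards(logp_prm, logp_ref, beta=0.05)
# Tokens 1 and 3 get positive rewards (PRM more confident than the
# reference); tokens 2 and 4 get negative rewards.
```

Because these rewards come for free from any outcome-trained model, the PRM can be updated online from the same policy rollouts and outcome labels used for RL, which is what removes the separate reward-model training phase.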

Ganqu Cui, Lifan Yuan, Zefan Wang, Hanbin Wang, Yuchen Zhang, Jiacheng Chen, Wendi Li, Bingxiang He, Yuchen Fan, Tianyu Yu, Qixin Xu, Weize Chen, Jiarui Yuan, Huayu Chen, Kaiyan Zhang, Xingtai Lv, Shuo Wang, Yuan Yao, Xu Han, Hao Peng, Yu Cheng, Zhiyuan Liu, Maosong Sun, Bowen Zhou, Ning Ding • 2025

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Mathematical Reasoning | MATH | Accuracy: 83.4 | 882 |
| Mathematical Reasoning | MATH | Accuracy: 80.6 | 535 |
| Mathematical Reasoning | MATH500 (test) | -- | 514 |
| Jailbreak Attack | HarmBench | Attack Success Rate (ASR): 93.67 | 487 |
| Mathematical Reasoning | MATH 500 | -- | 442 |
| Mathematical Reasoning | AIME 2024 | Accuracy: 18.39 | 370 |
| Mathematical Reasoning | GSM8K | Accuracy: 95.8 | 358 |
| Mathematical Reasoning | CollegeMATH | Accuracy: 41 | 276 |
| General Knowledge | MMLU | Accuracy: 30.1 | 234 |
| Mathematical Reasoning | AIME 2025 | Accuracy: 13.88 | 227 |

Showing 10 of 99 rows.
