
Reward Reasoning Model

About

Reward models play a critical role in guiding large language models toward outputs that align with human expectations. However, an open challenge remains in effectively utilizing test-time compute to enhance reward model performance. In this work, we introduce Reward Reasoning Models (RRMs), which are specifically designed to execute a deliberate reasoning process before generating final rewards. Through chain-of-thought reasoning, RRMs leverage additional test-time compute for complex queries where appropriate rewards are not immediately apparent. To develop RRMs, we implement a reinforcement learning framework that fosters self-evolved reward reasoning capabilities without requiring explicit reasoning traces as training data. Experimental results demonstrate that RRMs achieve superior performance on reward modeling benchmarks across diverse domains. Notably, we show that RRMs can adaptively exploit test-time compute to further improve reward accuracy. The pretrained reward reasoning models are available at https://huggingface.co/Reward-Reasoning.
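To make the idea concrete, here is a minimal sketch of how a reward reasoning model might be used for pairwise judging with test-time scaling via majority voting. The prompt layout, the `Verdict: A/B` marker, and all function names are assumptions for illustration only; the released checkpoints define their own chat template and output format.

```python
import re
from collections import Counter

def build_pairwise_prompt(query, response_a, response_b):
    # Hypothetical prompt layout illustrating the pairwise judging setup;
    # the actual RRM checkpoints use their own template.
    return (
        "Compare the two assistant responses to the query below. "
        "Reason step by step, then state your verdict as "
        "'Verdict: A' or 'Verdict: B'.\n\n"
        f"Query: {query}\n\n"
        f"Response A: {response_a}\n\n"
        f"Response B: {response_b}\n"
    )

def parse_verdict(reasoning_trace):
    # Take the last verdict marker in the chain-of-thought output,
    # so intermediate reasoning cannot be mistaken for the answer.
    matches = re.findall(r"Verdict:\s*([AB])", reasoning_trace)
    return matches[-1] if matches else None

def majority_vote(reasoning_traces):
    # Test-time compute scaling: sample several reasoning traces and
    # keep the most common verdict; more samples = more compute.
    votes = [v for v in (parse_verdict(t) for t in reasoning_traces) if v]
    return Counter(votes).most_common(1)[0][0] if votes else None
```

Sampling more traces before voting is one simple way to spend extra test-time compute on queries where the right reward is not immediately apparent.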

Jiaxin Guo, Zewen Chi, Li Dong, Qingxiu Dong, Xun Wu, Shaohan Huang, Furu Wei • 2025

Related benchmarks

Task            | Dataset                                                                                      | Metric           | Result | Rank
Reward Modeling | Aggregate of 7 benchmarks (HelpSteer3, Reward Bench V2, SCAN-HPD, HREF, LitBench, WQ_Arena, WPB) | Overall Accuracy | 70.09  | 45
Reward Modeling | JudgeBench (test)                                                                            | Overall          | 75.1   | 40
Reward Modeling | RM-Bench (test)                                                                              | Overall Score    | 82.8   | 39
Reward Modeling | HelpSteer 3                                                                                  | Accuracy         | 79.42  | 39
Reward Modeling | PPE Correctness (test)                                                                       | PPE Corr         | 67.9   | 26
Reward Modeling | RewardBench (test)                                                                           | RWBench          | 0.912  | 25
Reward Modeling | WPB                                                                                          | Accuracy         | 62.83  | 22
Reward Modeling | HREF                                                                                         | Accuracy         | 72.73  | 22
Reward Modeling | Reward Bench V2                                                                              | Accuracy         | 73.4   | 22
Reward Modeling | SCAN HPD                                                                                     | Accuracy         | 76.04  | 22
(Showing 10 of 14 rows.)
