
RM-R1: Reward Modeling as Reasoning

About

Reward modeling is essential for aligning large language models with human preferences through reinforcement learning. To provide accurate reward signals, a reward model (RM) should stimulate deep thinking and conduct interpretable reasoning before assigning a score or judgment. Inspired by recent advances in long chain-of-thought reasoning on reasoning-intensive tasks, we hypothesize and validate that integrating reasoning into reward modeling significantly enhances an RM's interpretability and performance. We introduce a new class of generative reward models, Reasoning Reward Models (ReasRMs), which formulate reward modeling as a reasoning task. We propose a reasoning-oriented training pipeline and use it to train a family of ReasRMs, RM-R1. RM-R1 features a chain-of-rubrics (CoR) mechanism: it self-generates sample-level chat rubrics or math/code solutions, then evaluates candidate responses against them. Training consists of two key stages: (1) distillation of high-quality reasoning chains and (2) reinforcement learning with verifiable rewards. Empirically, our models achieve superior average performance across three reward-model benchmarks, outperforming much larger open-weight models (e.g., INF-ORM-Llama3.1-70B) and proprietary ones (e.g., GPT-4o) by up to 4.9%. Beyond final performance, we perform thorough analyses to understand the key ingredients of successful ReasRM training.

Xiusi Chen, Gaotang Li, Ziqi Wang, Bowen Jin, Cheng Qian, Yu Wang, Hongru Wang, Yu Zhang, Denghui Zhang, Tong Zhang, Hanghang Tong, Heng Ji • 2025

Related benchmarks

| Task            | Dataset                  | Metric        | Result | Rank |
|-----------------|--------------------------|---------------|--------|------|
| Reward Modeling | RewardBench              | Accuracy      | 92.9   | 166  |
| Reward Modeling | RM-Bench                 | Accuracy      | 83.9   | 125  |
| Reward Modeling | RMB                      | Accuracy      | 73.0   | 120  |
| Reward Modeling | JudgeBench               | Accuracy      | 64.8   | 105  |
| Reward Modeling | RewardBench v1.0 (test)  | Average Score | 0.929  | 89   |
| Reward Modeling | RewardBench Focus 2      | Accuracy      | 84.6   | 82   |
| Reward Modeling | RewardBench v2           | Accuracy      | 61.4   | 72   |
| Reward Modeling | RewardBench Precise IF 2 | Accuracy      | 36.9   | 70   |
| Reward Modeling | RM-Bench (test)          | Overall Score | 83.9   | 63   |
| Reward Modeling | PPE-Preference           | Accuracy      | 65.6   | 60   |
Showing 10 of 41 rows
