
ReasonGRM: Enhancing Generative Reward Models through Large Reasoning Models

About

Generative Reward Models (GRMs) provide greater flexibility than scalar reward models in capturing human preferences, but their effectiveness is limited by poor reasoning capabilities. This often results in incomplete or overly speculative reasoning paths, leading to hallucinations or missing key information in complex tasks. We address this challenge with ReasonGRM, a three-stage generative reward modeling framework. In the first stage, Zero-RL is used to generate concise, outcome-directed reasoning paths that reduce the likelihood of critical omissions. In the second stage, we introduce a novel evaluation metric, $R^\star$, which scores reasoning paths based on their generation likelihood. This favors paths that reach correct answers with minimal exploration, helping to reduce hallucination-prone data during training. In the final stage, the model is further refined through reinforcement learning on challenging examples to enhance its preference discrimination capabilities. Experiments on three public benchmarks show that ReasonGRM achieves competitive or state-of-the-art performance, outperforming previous best GRMs by 1.8% on average and surpassing proprietary models such as GPT-4o by up to 5.6%. These results demonstrate the effectiveness of reasoning-aware training and highlight the importance of high-quality rationale selection for reliable preference modeling.
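The abstract does not give the exact form of $R^\star$, but it describes the intent: score reasoning paths by their generation likelihood, favor paths that reach the correct answer with minimal exploration, and filter the rest before training. A minimal sketch of that selection step, assuming (purely for illustration) that $R^\star$ behaves like a length-normalized log-likelihood over correct paths:

```python
import math

def r_star_score(token_logprobs, is_correct):
    """Hypothetical R*-style score: length-normalized generation
    likelihood of a reasoning path, discarded entirely if the path
    reaches a wrong answer. The true formula is not specified in
    the abstract; this is an illustrative assumption."""
    if not is_correct:
        return float("-inf")
    # Higher (closer to 0) means the model generated this path
    # confidently, i.e. with little exploration.
    return sum(token_logprobs) / len(token_logprobs)

def select_paths(candidates, keep=2):
    """Rank candidate (path, token_logprobs, is_correct) triples by
    the sketch score and keep the top `keep` correct paths as
    training data."""
    scored = [(r_star_score(logprobs, ok), path)
              for path, logprobs, ok in candidates]
    scored.sort(key=lambda t: t[0], reverse=True)
    return [path for score, path in scored[:keep]
            if score > float("-inf")]

candidates = [
    ("short direct path", [-0.1, -0.2, -0.1], True),
    ("meandering path",   [-1.5, -2.0, -0.9, -1.2], True),
    ("wrong-answer path", [-0.1, -0.1], False),
]
print(select_paths(candidates, keep=2))
# → ['short direct path', 'meandering path']
```

The length normalization here is one plausible choice for avoiding a bias toward short paths; the paper's actual metric may weight likelihood differently.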

Bin Chen, Xinzge Gao, Chuanrui Hu, Penghang Yu, Hua Zhang, Bing-Kun Bao • 2025

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Code Generation | LiveCodeBench | Average Score 50.1 | 68 |
| Reward Modeling | HelpSteer 3 | -- | 39 |
| General Instruction Following | Arena-Hard v2 | Score 55.9 | 23 |
| Reward Modeling | PPE-Preference | Accuracy 65.7 | 20 |
| Reward Modeling | RewardBench 2 | L-Acc 89.2 | 20 |
| General Instruction Following | WildBench | Score 88.1 | 19 |
| General-purpose Behavior | MultiChallenge | Score 55 | 7 |
| Overall Language Model Evaluation | Aggregated Benchmarks (STEM, Code, IF, General) | Average Score 59.4 | 7 |
| STEM Reasoning | AIME 2025 | Score 64.8 | 7 |
| STEM Reasoning | AIME 2024 | Score 75.5 | 7 |

Showing 10 of 11 rows.
