
Reward Modeling from Natural Language Human Feedback

About

Reinforcement Learning with Verifiable Rewards (RLVR) on preference data has become the mainstream approach for training Generative Reward Models (GRMs). In a typical pairwise rewarding task, a GRM generates a reasoning chain that ends with a critique and a preference label, and RLVR uses the correctness of the preference label as the training reward. In this paper, however, we demonstrate that such binary classification tasks leave GRMs prone to guessing correct outcomes without sound critiques. These spurious successes introduce substantial noise into the reward signal, impairing the effectiveness of reinforcement learning. To address this issue, we propose Reward Modeling from Natural Language Human Feedback (RM-NLHF), which leverages natural language feedback to obtain process reward signals, mitigating the limited solution space inherent in binary tasks. Specifically, we compute the similarity between GRM-generated and human critiques and use it as the training reward, which provides more accurate reward signals than outcome-only supervision. Because human critiques are difficult to scale up, we additionally introduce a Meta Reward Model (MetaRM) that learns to predict process rewards from datasets with human critiques and then generalizes to data without them. Experiments on multiple benchmarks demonstrate that our method consistently outperforms state-of-the-art GRMs trained with outcome-only rewards, confirming the superiority of natural language over binary human feedback as supervision.
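The core idea of RM-NLHF, scoring a generated critique by its similarity to a human critique and using that score as the RL reward, can be sketched in a few lines. The following is a minimal illustration rather than the paper's implementation: the abstract does not specify the similarity metric, so cosine similarity over sentence embeddings (via the sentence-transformers library) stands in for it, and the encoder choice and example critiques are assumptions.

```python
# Hypothetical sketch of a similarity-based process reward, as described
# in the abstract: score a GRM-generated critique against a reference
# human critique and use the similarity as the RL training reward.
# The paper's actual similarity metric is unspecified; cosine similarity
# over sentence embeddings is used here purely for illustration.
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed encoder choice

def process_reward(generated_critique: str, human_critique: str) -> float:
    """Reward = semantic similarity between model and human critiques."""
    emb = encoder.encode([generated_critique, human_critique])
    return float(util.cos_sim(emb[0], emb[1]).item())

# A critique that matches the human rationale should earn a higher reward
# than one that merely asserts the right preference label.
r_aligned = process_reward(
    "Response A is preferred: its derivation is correct and it checks units.",
    "A wins because its derivation is sound and it verifies the units.",
)
r_vague = process_reward(
    "Response A is better.",
    "A wins because its derivation is sound and it verifies the units.",
)
print(f"aligned critique: {r_aligned:.3f}, vague critique: {r_vague:.3f}")
```

In an RLVR loop, this scalar would replace the binary label-correctness reward, giving partial credit to critiques that capture the human rationale even when the final label alone would have been an easy guess.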

Zongqi Wang, Rui Wang, Yuchuan Wu, Yiyao Yu, Pinyi Zhang, Shaoning Sun, Yujiu Yang, Yongbin Li • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|------|---------|--------|--------|------|
| Code Generation | HumanEval+ | - | - | 189 |
| Reward Modeling | Aggregate of 7 benchmarks (HelpSteer3, Reward Bench V2, SCAN-HPD, HREF, LitBench, WQ Arena, WPB) | Overall Accuracy | 72.96 | 45 |
| Reward Modeling | HelpSteer3 | Accuracy | 83.15 | 39 |
| Reward Modeling | LitBench | Accuracy | 74.92 | 22 |
| Reward Modeling | Reward Bench V2 | Accuracy | 78.67 | 22 |
| Reward Modeling | SCAN-HPD | Accuracy | 78.88 | 22 |
| Reward Modeling | WQ Arena | Accuracy | 61.61 | 22 |
| Reward Modeling | HREF | Accuracy | 71.65 | 22 |
| Reward Modeling | WPB | Accuracy | 61.83 | 22 |
| General Language Model Evaluation | Arena-Hard V2.0 | Win Rate | 7.03 | 9 |
