
RM-Bench: Benchmarking Reward Models of Language Models with Subtlety and Style

About

Reward models are critical in techniques like Reinforcement Learning from Human Feedback (RLHF) and Inference Scaling Laws, where they guide language model alignment and select optimal responses. Despite their importance, existing reward model benchmarks often evaluate models by asking them to distinguish between responses generated by models of varying power. However, this approach fails to assess reward models on subtle but critical content changes and variations in style, resulting in a low correlation with policy model performance. To this end, we introduce RM-Bench, a novel benchmark designed to evaluate reward models based on their sensitivity to subtle content differences and resistance to style biases. Extensive experiments demonstrate that RM-Bench strongly correlates with policy model performance, making it a reliable reference for selecting reward models to align language models effectively. We evaluate nearly 40 reward models on RM-Bench. Our results reveal that even state-of-the-art models achieve an average performance of only 46.6%, which falls short of random-level accuracy (50%) when faced with style bias interference. These findings highlight the significant room for improvement in current reward models. Related code and data are available at https://github.com/THU-KEG/RM-Bench.
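The abstract's "select optimal responses" role refers to best-of-N selection: a reward model scores several candidate completions and the highest-scoring one is returned. A minimal sketch of that loop, where `reward_score` is a hypothetical placeholder standing in for a real reward model (not an API from RM-Bench or any particular library):

```python
# Best-of-N response selection with a reward model (sketch).
# `reward_score` is a toy stand-in for a learned reward model, kept
# self-contained so the example runs: it prefers longer responses.

def reward_score(prompt: str, response: str) -> float:
    """Toy reward: placeholder scoring by response length."""
    return float(len(response))

def best_of_n(prompt: str, candidates: list[str]) -> str:
    """Return the candidate the reward model scores highest."""
    return max(candidates, key=lambda r: reward_score(prompt, r))

prompt = "Explain RLHF briefly."
candidates = [
    "RLHF fine-tunes a model against a learned reward signal.",
    "No idea.",
]
print(best_of_n(prompt, candidates))
```

In practice `reward_score` would be a forward pass through a trained reward model; the benchmark's point is that if that model is fooled by style or misses subtle content errors, best-of-N selection inherits those mistakes.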

Yantao Liu, Zijun Yao, Rui Min, Yixin Cao, Lei Hou, Juanzi Li • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Reward Modeling | Reward Bench Math | EF | 0.305 | 52 |
| Reward Modeling | RM Bench Code | EF | 0.154 | 52 |
| Reward Model Suitability Audit | RM-Bench Chat | EF | 0.313 | 26 |
| Reward Modeling | Reward Bench safety subset prompt perturbations 2 | EF | -0.18 | 26 |
| Reward Modeling | Reward Bench safety subset response perturbations 2 | LE Score | -0.629 | 26 |
| Reward Modeling Suitability Evaluation | RM Bench Safety-accept | EF | 0.698 | 26 |
| Reward Modeling Suitability Evaluation | RM Bench Math | EF | -0.077 | 26 |
| Reward Modeling | RewardBench latest (full) | Average Score | 92.7 | 11 |
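The accuracy figures the abstract reports (46.6% vs. a 50% chance level) come from pairwise evaluation: a reward model is counted correct on a pair when it scores the chosen response above the rejected one. A minimal sketch of that metric, assuming a hypothetical `score` function in place of a real reward model:

```python
# Pairwise accuracy as used in reward-model benchmarks (sketch):
# a model is "correct" on a (prompt, chosen, rejected) triple when
# it scores the chosen response above the rejected one.
# `score` is a toy placeholder, not the benchmark's actual API.

def score(prompt: str, response: str) -> float:
    """Toy reward model: placeholder scoring by response length."""
    return float(len(response))

def pairwise_accuracy(pairs: list[tuple[str, str, str]]) -> float:
    """Fraction of (prompt, chosen, rejected) triples where the
    chosen response outscores the rejected one."""
    correct = sum(1 for p, c, r in pairs if score(p, c) > score(p, r))
    return correct / len(pairs)

data = [
    ("q1", "a detailed correct answer", "short"),
    ("q2", "ok", "a long but wrong answer"),
]
print(pairwise_accuracy(data))  # 0.5, i.e. chance level
```

Random guessing yields 0.5 under this metric, which is why the abstract treats 50% as the chance-level baseline that state-of-the-art models fall below under style-bias interference.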
