
Scaling Reward Modeling without Human Supervision

About

Learning from feedback is instrumental to advancing the capabilities and safety of frontier models, yet its effectiveness is often constrained by the cost and scalability of human supervision. We present a pilot study exploring the scaling of reward models through unsupervised approaches. We operationalize reward-based scaling (RBS), in its simplest form, as preference learning over document prefixes and suffixes drawn from large-scale web corpora. The approach shows advantages on several fronts: despite using no human annotations, training on 11M tokens of math-focused web data yields steady gains on RewardBench v1 and v2, and these improvements transfer consistently across initialization backbones spanning model families and scales. Across models, our method improves RewardBench v2 accuracy by up to +7.7 points on average, with gains of up to +16.1 on in-domain math subsets and consistent improvements on out-of-domain safety and general subsets. When applied to best-of-N selection and policy optimization, these reward models substantially improve downstream math performance and match or exceed strong supervised reward-model baselines of similar size. Overall, we demonstrate the feasibility and promise of training reward models without costly and potentially unreliable human annotations.
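To make the "preference learning over document prefixes and suffixes" idea concrete, here is a minimal sketch in plain Python. It assumes (as one plausible reading of the abstract, not the paper's confirmed recipe) that the preferred completion for a document's prefix is that document's true suffix, while the rejected completion is a suffix lifted from a different document, and that the pair is scored with the standard Bradley-Terry pairwise objective. The function names and the split fraction are illustrative, not from the paper.

```python
import math
import random


def make_preference_pairs(docs, split_frac=0.5, seed=0):
    """Build (prefix, chosen, rejected) triples from raw documents.

    Hypothetical construction: each document's true continuation is
    treated as the preferred ("chosen") suffix, and a suffix taken from
    a different document as the dispreferred ("rejected") one.
    """
    rng = random.Random(seed)
    pairs = []
    for i, doc in enumerate(docs):
        cut = max(1, int(len(doc) * split_frac))
        prefix, chosen = doc[:cut], doc[cut:]
        # Sample a rejected suffix from any *other* document.
        j = rng.choice([k for k in range(len(docs)) if k != i])
        other_cut = max(1, int(len(docs[j]) * split_frac))
        rejected = docs[j][other_cut:]
        pairs.append((prefix, chosen, rejected))
    return pairs


def bradley_terry_loss(reward_chosen, reward_rejected):
    """Pairwise preference loss: -log sigmoid(r_chosen - r_rejected)."""
    margin = reward_chosen - reward_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

A reward model trained this way never sees a human label: the supervision signal comes entirely from which suffix actually followed the prefix in the corpus, which is what makes the recipe scale with web data rather than with annotation budget.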

Jingxuan Fan, Yueying Li, Zhenting Qi, Dinghuai Zhang, Kianté Brantley, Sham M. Kakade, Hanlin Zhang • 2026

Related benchmarks

Task              Dataset                  Metric          Result   Rank
Reward Modeling   RewardBench v2           Accuracy        57       72
Reward Modeling   RewardBench v2 (test)    Average Score   33.2     42
Reward Modeling   RewardBench v1           Accuracy        70       28
Reward Modeling   RewardBench v2           Score           57       4
