
Skywork-Reward-V2: Scaling Preference Data Curation via Human-AI Synergy

About

Despite the critical role of reward models (RMs) in Reinforcement Learning from Human Feedback (RLHF), current state-of-the-art open RMs perform poorly on most existing evaluation benchmarks, failing to capture nuanced human preferences. We hypothesize that this brittleness stems primarily from limitations in preference datasets, which are often narrowly scoped, synthetically labeled, or lack rigorous quality control. To address these challenges, we present SynPref-40M, a large-scale preference dataset comprising 40 million preference pairs. To enable data curation at scale, we design a human-AI synergistic two-stage pipeline that leverages the complementary strengths of human annotation quality and AI scalability. In this pipeline, humans provide verified annotations, while LLMs perform automatic curation based on human guidance. Building on this preference mixture, we introduce Skywork-Reward-V2, a suite of eight reward models with 0.6B to 8B parameters, trained on a carefully curated subset of 26 million preference pairs from SynPref-40M. We demonstrate that Skywork-Reward-V2 is versatile across a wide range of capabilities, including alignment with human preferences, objective correctness, safety, resistance to stylistic biases, and best-of-N scaling. These reward models achieve state-of-the-art performance across seven major reward model benchmarks, outperform generative reward models, and demonstrate strong downstream performance. Ablation studies confirm that this effectiveness stems not only from data scale but also from high-quality curation. The Skywork-Reward-V2 series represents substantial progress in open reward models, demonstrating how human-AI curation synergy can unlock significantly higher data quality.
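Best-of-N scaling, one of the capabilities the abstract evaluates, simply means sampling N candidate responses and keeping the one the reward model scores highest. A minimal sketch is below; `reward_fn` is a hypothetical stand-in for any trained RM's scoring function (the actual Skywork-Reward-V2 scoring interface is not described here):

```python
def best_of_n(prompt, candidates, reward_fn):
    """Return the candidate response that the reward model scores highest.

    reward_fn(prompt, response) -> float is a hypothetical stand-in for
    a trained reward model such as Skywork-Reward-V2.
    """
    return max(candidates, key=lambda response: reward_fn(prompt, response))

# Toy usage with a dummy reward that simply prefers longer answers.
dummy_reward = lambda prompt, response: float(len(response))
picked = best_of_n("Explain RLHF.", ["short", "a longer answer"], dummy_reward)
```

As N grows, the selected response tracks the reward model's preferences more closely, which is why RM quality (not just policy quality) drives best-of-N results.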

Chris Yuhao Liu, Liang Zeng, Yuzhen Xiao, Jujie He, Jiacai Liu, Chaojie Wang, Rui Yan, Wei Shen, Fuxiang Zhang, Jiacheng Xu, Yang Liu, Yahui Zhou• 2025

Related benchmarks

Task                         | Dataset                  | Metric                     | Result | Rank
Instruction Following        | IFEval                   | --                         | --     | 625
Instruction Following        | AlpacaEval 2.0           | --                         | --     | 507
General Knowledge            | MMLU                     | General Knowledge Accuracy | 69.6   | 234
Mathematical Problem Solving | MATH                     | Accuracy                   | 52.4   | 229
Reward Modeling              | RewardBench              | Accuracy                   | 97.8   | 166
Reward Modeling              | RewardBench              | --                         | --     | 146
Reward Modeling              | RM-Bench                 | Accuracy                   | 96     | 125
Reward Modeling              | RMB                      | Accuracy                   | 89.3   | 120
Reward Modeling              | JudgeBench               | Accuracy                   | 83.4   | 105
Reward Modeling              | RewardBench v1.0 (test)  | Average Score              | 0.978  | 89

(Showing 10 of 81 rows.)
