
HelpSteer2: Open-source dataset for training top-performing reward models

About

High-quality preference datasets are essential for training reward models that can effectively guide large language models (LLMs) in generating high-quality responses aligned with human preferences. As LLMs become stronger and better aligned, permissively licensed preference datasets, such as Open Assistant, HH-RLHF, and HelpSteer, need to be updated to remain effective for reward modeling. Methods that distill preference data from proprietary LLMs such as GPT-4 are subject to restrictions on commercial usage imposed by the model providers. To improve both the generated responses and the quality of attribute labeling, we release HelpSteer2, a permissively licensed (CC-BY-4.0) preference dataset. Using a powerful internal base model trained on HelpSteer2, we achieve the SOTA score (92.0%) on RewardBench's primary dataset, outperforming the open and proprietary models listed as of June 12th, 2024. Notably, HelpSteer2 consists of only ten thousand response pairs, an order of magnitude fewer than existing preference datasets (e.g., HH-RLHF), which makes it highly efficient for training reward models. Our extensive experiments demonstrate that reward models trained with HelpSteer2 are effective in aligning LLMs. In particular, we propose SteerLM 2.0, a model alignment approach that can effectively make use of the rich multi-attribute scores predicted by our reward models. HelpSteer2 is available at https://huggingface.co/datasets/nvidia/HelpSteer2 and code is available at https://github.com/NVIDIA/NeMo-Aligner.

Zhilin Wang, Yi Dong, Olivier Delalleau, Jiaqi Zeng, Gerald Shen, Daniel Egert, Jimmy J. Zhang, Makesh Narsimhan Sreedhar, Oleksii Kuchaiev • 2024
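
For concreteness, the sketch below loads HelpSteer2 with the Hugging Face `datasets` library and turns its per-response annotations into chosen/rejected preference pairs. It is an illustration, not the paper's training pipeline: the attribute weights are hypothetical, and it assumes (per the dataset card) that the two responses to each prompt occupy consecutive rows, each annotated with five 0-4 attribute scores (helpfulness, correctness, coherence, complexity, verbosity).

```python
# Minimal sketch: build preference pairs from HelpSteer2's attribute scores.
# Not the authors' pipeline; the attribute weights below are illustrative only.
from datasets import load_dataset

ds = load_dataset("nvidia/HelpSteer2", split="train")

ATTRIBUTES = ["helpfulness", "correctness", "coherence", "complexity", "verbosity"]

# Hypothetical weights for collapsing the five 0-4 scores into one scalar;
# the paper's reward models instead predict each attribute directly.
WEIGHTS = {"helpfulness": 1.0, "correctness": 1.0, "coherence": 0.5,
           "complexity": 0.0, "verbosity": 0.0}

def scalar_reward(row):
    """Weighted sum of a response's annotated attribute scores."""
    return sum(WEIGHTS[a] * row[a] for a in ATTRIBUTES)

# Assumption: consecutive rows (0 and 1, 2 and 3, ...) hold the two
# responses to the same prompt, so each step of 2 yields one pair.
pairs = []
for i in range(0, len(ds) - 1, 2):
    a, b = ds[i], ds[i + 1]
    if a["prompt"] != b["prompt"]:
        continue  # skip rows that violate the pairing assumption
    chosen, rejected = (a, b) if scalar_reward(a) >= scalar_reward(b) else (b, a)
    pairs.append({"prompt": a["prompt"],
                  "chosen": chosen["response"],
                  "rejected": rejected["response"]})

print(f"Built {len(pairs)} preference pairs from {len(ds)} annotated responses.")
```

The multi-attribute annotation is the point of difference from binary-preference datasets such as HH-RLHF: rather than a single "A beats B" label, each response carries graded scores that a reward model (or the SteerLM 2.0 alignment procedure) can consume directly.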

Related benchmarks

Task            | Dataset                                                                                          | Result                 | Rank
Reward Modeling | RewardBench                                                                                      | Accuracy: 92.0         | 166
Reward Modeling | RewardBench                                                                                      | Chat Score: 95.8       | 146
Reward Modeling | RM-Bench                                                                                         | Accuracy: 72.2         | 125
Reward Modeling | RMB                                                                                              | Accuracy: 69.9         | 120
Reward Modeling | JudgeBench                                                                                       | Accuracy: 65.8         | 105
Reward Modeling | RewardBench v2                                                                                   | Accuracy: 76.7         | 72
Reward Modeling | Aggregate of 7 benchmarks (HelpSteer3, Reward Bench V2, SCAN-HPD, HREF, LitBench, WQ_Arena, WPB) | Overall Accuracy: 70.5 | 45
Reward Modeling | PPE Correctness                                                                                  | Accuracy: 60.8         | 33
Reward Modeling | PPE-P                                                                                            | Accuracy: 59.3         | 23
Reward Modeling | PPE Preference ZH                                                                                | Accuracy: 68.7         | 19

(Showing 10 of 12 rows.)

Other info

Code: https://github.com/NVIDIA/NeMo-Aligner
