
HelpSteer2-Preference: Complementing Ratings with Preferences

About

Reward models are critical for aligning models to follow instructions, and they are typically trained following one of two popular paradigms: Bradley-Terry style or Regression style. However, there is a lack of evidence that either approach is better than the other when adequately matched for data. This is primarily because the two approaches require data collected in different (and incompatible) formats, so adequately matched data is not available in existing public datasets. To tackle this problem, we release preference annotations (designed for Bradley-Terry training) to complement the existing ratings (designed for Regression-style training) in the HelpSteer2 dataset. To improve data interpretability, each preference annotation is accompanied by a human-written justification. Using this data, we conduct the first head-to-head comparison of Bradley-Terry and Regression models when adequately matched for data. Based on insights derived from this comparison, we propose a novel approach that combines Bradley-Terry and Regression reward modeling. A Llama-3.1-70B-Instruct model tuned with this approach scores 94.1 on RewardBench, ranking first among more than 140 reward models as of 1 Oct 2024. This reward model can then be used with the REINFORCE algorithm (RLHF) to align an Instruct model to reach 85.0 on Arena Hard, which is also No. 1 as of 1 Oct 2024. We open-source this dataset (CC-BY-4.0 license) at https://huggingface.co/datasets/nvidia/HelpSteer2#preferences-new---1-oct-2024 and openly release the trained Reward and Instruct models at https://huggingface.co/nvidia/Llama-3.1-Nemotron-70B-Reward and https://huggingface.co/nvidia/Llama-3.1-Nemotron-70B-Instruct.
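
For readers less familiar with the two paradigms, below is a minimal PyTorch sketch contrasting a Bradley-Terry pairwise loss with a Regression-style (MSE) loss over scalar ratings. This is an illustrative sketch only, not the paper's training code; `reward_model` and the batch fields (`chosen`, `rejected`, `responses`, `ratings`) are hypothetical placeholders.

```python
import torch
import torch.nn.functional as F

# Hypothetical reward model: maps a batch of tokenized (prompt, response)
# inputs to a scalar score per example, i.e. returns a tensor of shape (batch,).

def bradley_terry_loss(reward_model, chosen, rejected):
    """Bradley-Terry pairwise loss over preference annotations.

    Pushes the score of the preferred response above the rejected one:
        L = -log sigmoid(r(chosen) - r(rejected))
    """
    r_chosen = reward_model(chosen)      # (batch,)
    r_rejected = reward_model(rejected)  # (batch,)
    return -F.logsigmoid(r_chosen - r_rejected).mean()

def regression_loss(reward_model, responses, ratings):
    """Regression-style loss over scalar human ratings.

    Fits the model score directly to an absolute rating
    (e.g. a HelpSteer2-style helpfulness score):
        L = (r(response) - rating)^2
    """
    r = reward_model(responses)          # (batch,)
    return F.mse_loss(r, ratings)
```

The two losses consume differently structured data (response pairs vs. per-response ratings), which is why a matched comparison requires both annotation types over the same underlying samples, as the released HelpSteer2-Preference annotations provide.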

Zhilin Wang, Alexander Bukharin, Olivier Delalleau, Daniel Egert, Gerald Shen, Jiaqi Zeng, Oleksii Kuchaiev, Yi Dong • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Commonsense Reasoning | WinoGrande | - | - | 1085 |
| Code Generation | HumanEval | Pass@1 | 40.85 | 1036 |
| Reasoning | BBH | - | - | 672 |
| Instruction Following | IFEval | - | - | 625 |
| Code Generation | HumanEval+ | Pass@1 | 22.93 | 383 |
| Reward Modeling | RewardBench | Accuracy | 93.9 | 166 |
| Knowledge | MMLU | Accuracy | 59.85 | 136 |
| Reward Modeling | RM-Bench | Accuracy | 72.2 | 125 |
| Reward Modeling | RMB | Accuracy | 64.9 | 120 |
| Reward Modeling | JudgeBench | Accuracy | 65.8 | 105 |

Showing 10 of 68 rows.
