
Reward Model Ensembles Help Mitigate Overoptimization

About

Reinforcement learning from human feedback (RLHF) is a standard approach for fine-tuning large language models to follow instructions. As part of this process, learned reward models are used to approximately model human preferences. However, as imperfect representations of the "true" reward, these learned reward models are susceptible to overoptimization. Gao et al. (2023) studied this phenomenon in a synthetic human feedback setup with a significantly larger "gold" reward model acting as the true reward (instead of humans) and showed that overoptimization remains a persistent problem regardless of the size of the proxy reward model and training data used. Using a similar setup, we conduct a systematic study to evaluate the efficacy of using ensemble-based conservative optimization objectives, specifically worst-case optimization (WCO) and uncertainty-weighted optimization (UWO), for mitigating reward model overoptimization when using two optimization methods: (a) best-of-n sampling (BoN) and (b) proximal policy optimization (PPO). We additionally extend the setup of Gao et al. (2023) to include 25% label noise to better mirror real-world conditions. Both with and without label noise, we find that conservative optimization practically eliminates overoptimization and improves performance by up to 70% for BoN sampling. For PPO, ensemble-based conservative optimization always reduces overoptimization and outperforms single reward model optimization. Moreover, combining it with a small KL penalty successfully prevents overoptimization at no performance cost. Overall, our results demonstrate that ensemble-based conservative optimization can effectively counter overoptimization.
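To make the two conservative objectives concrete, below is a minimal, hypothetical sketch (not the authors' code). Each ensemble member is stood in for by a toy callable mapping a completion to a scalar reward; the combination rules follow the definitions named in the abstract: WCO scores a completion by the ensemble minimum, UWO by the ensemble mean minus a lambda-weighted intra-ensemble variance penalty, and BoN then selects the candidate that maximizes the conservative proxy score. The reward functions, lambda value, and candidate strings here are all illustrative assumptions.

```python
# Sketch of ensemble-based conservative optimization objectives (WCO, UWO)
# and their use with best-of-n (BoN) sampling. The "reward models" below
# are hypothetical stand-ins; in the paper these would be trained proxy
# reward models scoring policy samples.
from statistics import mean, pvariance
from typing import Callable, Sequence

RewardModel = Callable[[str], float]


def wco_reward(ensemble: Sequence[RewardModel], completion: str) -> float:
    """Worst-case optimization (WCO): pessimistic ensemble minimum."""
    return min(rm(completion) for rm in ensemble)


def uwo_reward(ensemble: Sequence[RewardModel], completion: str,
               lam: float = 0.5) -> float:
    """Uncertainty-weighted optimization (UWO): ensemble mean minus a
    lambda-weighted intra-ensemble variance penalty (lam is assumed)."""
    scores = [rm(completion) for rm in ensemble]
    return mean(scores) - lam * pvariance(scores)


def best_of_n(completions: Sequence[str],
              objective: Callable[[str], float]) -> str:
    """BoN sampling: return the candidate with the highest proxy reward."""
    return max(completions, key=objective)


if __name__ == "__main__":
    # Toy three-member ensemble: each member scores by length with a
    # different offset, mimicking disagreement between reward models.
    ensemble = [lambda s, b=b: len(s) / 10.0 + b for b in (0.0, 0.3, -0.2)]
    candidates = [
        "short answer",
        "a somewhat longer candidate answer",
        "the longest and most detailed candidate answer of all",
    ]
    print(best_of_n(candidates, lambda s: wco_reward(ensemble, s)))
    print(best_of_n(candidates, lambda s: uwo_reward(ensemble, s, lam=0.5)))
```

For PPO, the same conservative score would simply replace the single-model reward signal at each step, optionally combined with the small KL penalty the abstract mentions.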

Thomas Coste, Usman Anwar, Robert Kirk, David Krueger • 2023

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Reward Modeling | RewardBench | Avg Score | 69.3 | 118 |
| Reward Modeling | Unified Feedback (UF) | Accuracy | 76.9 | 40 |
| Preference Classification | Anthropic HH Harmless (test) | Accuracy | 58 | 22 |
| Reward Modeling | RewardBench unified-feedback (test) | Average Score | 76.6 | 20 |
| Reward Modeling | HHH-Alignment OOD (test) | Score | 72.2 | 8 |
| Reward Modeling | Unified-Feedback ID (test) | Reward Score | 69.9 | 8 |
| Reward Modeling | Unified-Feedback (ID) | Accuracy | 72.8 | 8 |
| Reward Modeling | HHH-Alignment (OOD) | Accuracy | 76.8 | 8 |
| Reward Modeling | MT-Bench OOD (test) | Score | 71.1 | 8 |
| Preference Classification | WebGPT comparisons (test) | Accuracy | 60.6 | 7 |

Showing 10 of 11 rows.
