
Distributional Preference Learning: Understanding and Accounting for Hidden Context in RLHF

About

In practice, preference learning from human feedback depends on incomplete data with hidden context. Hidden context refers to data that affects the feedback received, but which is not represented in the data used to train a preference model. This captures common issues of data collection, such as having human annotators with varied preferences, cognitive processes that result in seemingly irrational behavior, and combining data labeled according to different criteria. We prove that standard applications of preference learning, including reinforcement learning from human feedback (RLHF), implicitly aggregate over hidden contexts according to a well-known voting rule called Borda count. We show this can produce counter-intuitive results that are very different from other methods which implicitly aggregate via expected utility. Furthermore, our analysis formalizes the way that preference learning from users with diverse values tacitly implements a social choice function. A key implication of this result is that annotators have an incentive to misreport their preferences in order to influence the learned model, leading to vulnerabilities in the deployment of RLHF. As a step towards mitigating these problems, we introduce a class of methods called distributional preference learning (DPL). DPL methods estimate a distribution of possible score values for each alternative in order to better account for hidden context. Experimental results indicate that applying DPL to RLHF for LLM chatbots identifies hidden context in the data and significantly reduces subsequent jailbreak vulnerability. Our code and data are available at https://github.com/cassidylaidlaw/hidden-context.
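The gap between Borda-count aggregation and expected-utility aggregation can be illustrated with a toy numerical example (the numbers and scenario below are illustrative, not taken from the paper's experiments). Suppose the hidden context is which annotator group a comparison comes from: most annotators mildly dislike response A, a minority strongly prefer it, and everyone is neutral about response B. Standard preference learning fits win probabilities from pairwise labels, which the paper shows amounts to ranking by Borda count:

```python
import statistics

# Hypothetical hidden context: 6 of 10 annotators assign A utility -1,
# 4 of 10 assign A utility 10; B has utility 0 for everyone.
# Each tuple is (u(A | z), u(B | z)) for one annotator's context z.
contexts = [(-1, 0)] * 6 + [(10, 0)] * 4

# Aggregating by expected utility over the hidden context:
mean_utility_A = statistics.mean(uA for uA, uB in contexts)
mean_utility_B = statistics.mean(uB for uA, uB in contexts)

# Aggregating by Borda count (with two alternatives, this is just the
# fraction of hidden contexts in which each alternative wins the
# pairwise comparison -- what a standard preference model learns):
borda_A = statistics.mean(1.0 if uA > uB else 0.0 for uA, uB in contexts)
borda_B = 1.0 - borda_A

print(mean_utility_A, mean_utility_B)  # 3.4 0   -> expected utility prefers A
print(borda_A, borda_B)                # 0.4 0.6 -> Borda count prefers B
```

The two aggregation rules rank the alternatives in opposite orders: A has higher average utility, but loses most pairwise comparisons, so a preference model trained on these labels would score B higher.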

Anand Siththaranjan, Cassidy Laidlaw, Dylan Hadfield-Menell · 2023

Related benchmarks

Task                   Dataset                 Metric                    Result  Rank
Preference Alignment   UF-P-2                  Accuracy                  62.74   20
Preference Alignment   UF-P-4                  Accuracy (%)              57.66   20
Defense Robustness     Direct Inquiry Vanilla  Keyword Match Rate        100     16
Defense Robustness     GCG Attack              Keyword Match Rate        99      16
Defense Robustness     RepE Attack             DSR Keyword Match Rate    90      16
Defense Robustness     Soft Prompt Attack      DSR Keyword Success Rate  96      16
Defense Robustness     SCAV Attack             DSR (Keyword)             54      16
Defense Robustness     AutoDAN Attack          Keyword Success Rate      97      16
Preference Prediction  Pets                    Accuracy                  62.02   8
