
Deep reinforcement learning from human preferences

About

For sophisticated reinforcement learning (RL) systems to interact usefully with real-world environments, we need to communicate complex goals to these systems. In this work, we explore goals defined in terms of (non-expert) human preferences between pairs of trajectory segments. We show that this approach can effectively solve complex RL tasks without access to the reward function, including Atari games and simulated robot locomotion, while providing feedback on less than one percent of our agent's interactions with the environment. This reduces the cost of human oversight far enough that it can be practically applied to state-of-the-art RL systems. To demonstrate the flexibility of our approach, we show that we can successfully train complex novel behaviors with about an hour of human time. These behaviors and environments are considerably more complex than any that have been previously learned from human feedback.
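The core mechanism described above — fitting a reward function to human comparisons between pairs of trajectory segments — can be sketched with a toy Bradley-Terry model, where the probability of preferring one segment is the sigmoid of the difference in predicted returns. Everything below is an illustrative assumption, not the paper's setup: the "true" reward is linear in the observation (the paper uses a neural network), the segments are random vectors, and the "human" labeler is simulated by the hidden reward.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM, T, N_PAIRS = 3, 10, 500   # obs dim, segment length, labeled pairs

# Hidden "true" reward, linear in the observation (illustrative stand-in
# for the human's judgment).
true_w = np.array([1.0, -2.0, 0.5])

def segment_return(seg, w):
    """Total reward of a segment: sum over steps of r(o_t) = o_t . w."""
    return seg.sum(axis=0) @ w

# Synthetic preference dataset: the simulated "human" prefers whichever
# segment has the higher true return.
pairs, prefs = [], []
for _ in range(N_PAIRS):
    s1, s2 = rng.normal(size=(T, DIM)), rng.normal(size=(T, DIM))
    pairs.append((s1, s2))
    prefs.append(1.0 if segment_return(s1, true_w) > segment_return(s2, true_w) else 0.0)

# Fit reward weights by gradient descent on the Bradley-Terry
# cross-entropy: P(s1 preferred over s2) = sigmoid(R(s1) - R(s2)).
w, lr = np.zeros(DIM), 0.05
for _ in range(200):
    grad = np.zeros(DIM)
    for (s1, s2), y in zip(pairs, prefs):
        feat = s1.sum(axis=0) - s2.sum(axis=0)
        p = 1.0 / (1.0 + np.exp(-(feat @ w)))
        grad += (p - y) * feat
    w -= lr * grad / N_PAIRS

# Check: the learned reward ranks fresh segment pairs like the true one.
agree = 0
for _ in range(200):
    s1, s2 = rng.normal(size=(T, DIM)), rng.normal(size=(T, DIM))
    agree += int((segment_return(s1, w) > segment_return(s2, w))
                 == (segment_return(s1, true_w) > segment_return(s2, true_w)))
agreement = agree / 200
print(f"ranking agreement with true reward: {agreement:.2f}")
```

In the paper's full pipeline this learned reward would then stand in for the missing environment reward while an RL algorithm trains the policy, with new comparisons queried as the policy's behavior changes.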

Paul Christiano, Jan Leike, Tom B. Brown, Miljan Martic, Shane Legg, Dario Amodei • 2017

Related benchmarks

Task                            Dataset    Result       Rank
Imitation Learning              Dataset 1  MAE 8.35     13
Imitation Learning              Dataset 2  MAE 3.7      13
Imitation Learning              Dataset 3  MAE 9.76     13
Imitation Learning              Dataset 4  MAE 9.7      13
Imitation Learning              Dataset 5  MAE 14.3     13
Inverse Reinforcement Learning  Dataset 1  MSE 104      13
Inverse Reinforcement Learning  Dataset 2  MSE 24.54    13
Inverse Reinforcement Learning  Dataset 3  MSE 125.7    13
Inverse Reinforcement Learning  Dataset 4  MSE 124.1    13
Inverse Reinforcement Learning  Dataset 5  MSE 284      13

Showing 10 of 19 rows.
