
Inverse Preference Learning: Preference-based RL without a Reward Function

About

Reward functions are difficult to design and often hard to align with human intent. Preference-based Reinforcement Learning (RL) algorithms address these problems by learning reward functions from human feedback. However, the majority of preference-based RL methods naïvely combine supervised reward models with off-the-shelf RL algorithms. Contemporary approaches have sought to improve performance and query complexity by using larger and more complex reward architectures such as transformers. Instead of using highly complex architectures, we develop a new and parameter-efficient algorithm, Inverse Preference Learning (IPL), specifically designed for learning from offline preference data. Our key insight is that for a fixed policy, the $Q$-function encodes all information about the reward function, effectively making them interchangeable. Using this insight, we completely eliminate the need for a learned reward function. Our resulting algorithm is simpler and more parameter-efficient. Across a suite of continuous control and robotics benchmarks, IPL attains competitive performance compared to more complex approaches that leverage transformer-based and non-Markovian reward functions while having fewer algorithmic hyperparameters and learned network parameters. Our code is publicly released.
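The key insight above — that for a fixed policy the $Q$-function encodes all information about the reward — can be illustrated with a small tabular sketch. This is a hedged, hypothetical example (random MDP, made-up variable names), not the paper's implementation: given a fixed policy $\pi$ and its $Q$-function, the reward is recovered exactly by the inverse Bellman operator $r(s,a) = Q(s,a) - \gamma \, \mathbb{E}_{s'}[V^\pi(s')]$, where $V^\pi(s') = \sum_a \pi(a|s') Q(s',a)$.

```python
import numpy as np

rng = np.random.default_rng(0)
S, A, gamma = 4, 2, 0.9

# Hypothetical tabular MDP: transitions P[s, a, s'] and a reward table r[s, a].
P = rng.random((S, A, S)); P /= P.sum(axis=2, keepdims=True)
r = rng.random((S, A))

# A fixed stochastic policy pi[s, a].
pi = rng.random((S, A)); pi /= pi.sum(axis=1, keepdims=True)

# Solve for Q^pi from the Bellman equation
#   Q(s,a) = r(s,a) + gamma * sum_{s'} P(s'|s,a) * sum_{a'} pi(a'|s') Q(s',a')
# written as a linear system over the flattened Q.
n = S * A
M = np.einsum("ijk,kl->ijkl", P, pi).reshape(n, n)  # M[(s,a),(s',a')] = P(s'|s,a) pi(a'|s')
Q = np.linalg.solve(np.eye(n) - gamma * M, r.reshape(n)).reshape(S, A)

# Inverse Bellman operator: recover the reward from Q alone.
V = (pi * Q).sum(axis=1)         # V^pi(s) = sum_a pi(a|s) Q(s,a)
r_recovered = Q - gamma * P @ V  # r(s,a) = Q(s,a) - gamma * E_{s'}[V^pi(s')]

assert np.allclose(r, r_recovered)  # the reward is fully encoded in Q^pi
```

Because the mapping between $r$ and $Q^\pi$ is invertible in this way, preference losses normally written over rewards can instead be written directly over $Q$, which is what lets IPL drop the separate learned reward model.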

Joey Hejna, Dorsa Sadigh • 2023

Related benchmarks

Task                           | Dataset                            | Result                     | Rank
Offline Reinforcement Learning | D4RL hopper-medium-expert          | Normalized Score 74.52     | 115
Offline Reinforcement Learning | D4RL walker2d-medium-expert        | Normalized Score 108.5     | 86
Offline Reinforcement Learning | D4RL Medium-Replay Hopper          | Normalized Score 73.57     | 72
Offline Reinforcement Learning | D4RL Medium-Replay Walker2d        | Normalized Score 59.92     | 34
Offline Reinforcement Learning | Robomimic Can multi-human          | Avg Normalized Score 57.6  | 7
Offline Reinforcement Learning | Robomimic Lift (proficient-human)  | Avg Normalized Score 97.6  | 7
Offline Reinforcement Learning | Robomimic Can (proficient-human)   | Avg Normalized Score 74.8  | 7
Offline Reinforcement Learning | Robomimic Lift multi-human         | Avg Normalized Score 87.2  | 7
