OPRIDE: Offline Preference-based Reinforcement Learning via In-Dataset Exploration
About
Preference-based reinforcement learning (PbRL) can help avoid sophisticated reward designs and align better with human intentions, showing great promise in various real-world applications. However, obtaining human feedback for preferences can be expensive and time-consuming, posing a major barrier to PbRL. In this work, we address the problem of low query efficiency in offline PbRL and pinpoint two primary causes: inefficient exploration and overoptimization of learned reward functions. In response to these challenges, we propose a novel algorithm, **O**ffline **P**b**R**L via **I**n-**D**ataset **E**xploration (OPRIDE), designed to enhance the query efficiency of offline PbRL. OPRIDE consists of two key features: a principled exploration strategy that maximizes the informativeness of the queries, and a discount scheduling mechanism aimed at mitigating overoptimization of the learned reward functions. Through empirical evaluations, we demonstrate that OPRIDE significantly outperforms prior methods, achieving strong performance with notably fewer queries. Moreover, we provide theoretical guarantees of the algorithm's efficiency. Experimental results across various locomotion, manipulation, and navigation tasks underscore the efficacy and versatility of our approach.
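The two ingredients described above can be illustrated with a minimal sketch. This is not the paper's implementation: the disagreement-based informativeness score, the Bradley-Terry preference model, and the linear discount schedule below are assumptions chosen only to make the two mechanisms concrete (a reward-ensemble variance proxy for query informativeness, and an annealed discount factor to limit how far errors in a still-inaccurate learned reward propagate through bootstrapping).

```python
import numpy as np

def query_informativeness(reward_ensemble, traj_a, traj_b):
    """Score a candidate query (pair of in-dataset trajectories) by how much
    an ensemble of learned reward models disagrees about which one is
    preferred. Higher variance = more informative query. (Illustrative
    proxy; the paper's actual criterion may differ.)"""
    returns_a = np.array([r(traj_a) for r in reward_ensemble])
    returns_b = np.array([r(traj_b) for r in reward_ensemble])
    # Bradley-Terry preference probability P(a preferred over b), per model.
    prefs = 1.0 / (1.0 + np.exp(-(returns_a - returns_b)))
    return prefs.var()

def select_queries(reward_ensemble, candidate_pairs, budget):
    """Spend the limited query budget on the most informative pairs."""
    scores = [query_informativeness(reward_ensemble, a, b)
              for a, b in candidate_pairs]
    order = np.argsort(scores)[::-1]
    return [candidate_pairs[i] for i in order[:budget]]

def discount_schedule(step, total_steps, gamma_min=0.90, gamma_max=0.99):
    """One plausible discount schedule: keep gamma low early, when the
    learned reward is least reliable, so reward errors compound over fewer
    effective steps; anneal toward gamma_max as training progresses."""
    frac = min(step / total_steps, 1.0)
    return gamma_min + frac * (gamma_max - gamma_min)
```

In this sketch a query on which the ensemble splits (some models prefer trajectory `a`, others `b`) scores higher than one on which all models agree, so the budget is spent where human feedback resolves the most uncertainty.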
Related benchmarks
| Task | Dataset | Metric | Value | Rank |
|---|---|---|---|---|
| Reinforcement Learning | Atari Breakout | Mean Return | 256.7 | 23 |
| Reinforcement Learning | Atari 2600 Qbert | Score | 1.35e+4 | 20 |
| Reinforcement Learning | Atari Pong | Mean Episode Return | 17.8 | 19 |
| HalfCheetah | D4RL Medium v0 | Normalized Score | 42.4 | 19 |
| Robotic Manipulation | D4RL Kitchen-Mixed | Normalized Score | 49.8 | 14 |
| Robotic Manipulation | D4RL Kitchen-Partial | Normalized Score | 38.7 | 14 |
| Reinforcement Learning | Atari 2600 Seaquest | Average Score | 3.48e+3 | 12 |
| Offline Reinforcement Learning | AntMaze | Success Rate (umaze) | 87.5 | 5 |
| Reinforcement Learning | Atari Asterix | Score | 426.9 | 5 |
| Robot Manipulation | Meta-World | Success Rate (lever-pull) | 51.8 | 5 |