
OPRIDE: Offline Preference-based Reinforcement Learning via In-Dataset Exploration

About

Preference-based reinforcement learning (PbRL) can help avoid sophisticated reward designs and align better with human intentions, showing great promise in various real-world applications. However, obtaining human feedback for preferences can be expensive and time-consuming, which poses a major barrier for PbRL. In this work, we address the problem of low query efficiency in offline PbRL, pinpointing two primary reasons: inefficient exploration and overoptimization of learned reward functions. In response to these challenges, we propose a novel algorithm, Offline PbRL via In-Dataset Exploration (OPRIDE), designed to enhance the query efficiency of offline PbRL. OPRIDE consists of two key features: a principled exploration strategy that maximizes the informativeness of the queries and a discount scheduling mechanism aimed at mitigating overoptimization of the learned reward functions. Through empirical evaluations, we demonstrate that OPRIDE significantly outperforms prior methods, achieving strong performance with notably fewer queries. Moreover, we provide theoretical guarantees of the algorithm's efficiency. Experimental results across various locomotion, manipulation, and navigation tasks underscore the efficacy and versatility of our approach.
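The abstract names two mechanisms: picking maximally informative preference queries and scheduling the discount factor to curb reward overoptimization. The page gives no code, so the following is only a minimal sketch of the general idea. Ensemble disagreement stands in for the paper's (unspecified) informativeness criterion, a linear Bradley-Terry reward model stands in for a learned reward network, the linear ramp stands in for the paper's discount schedule, and all names and the toy data are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "dataset": 50 trajectory segments, each summarized by a feature vector.
segments = rng.normal(size=(50, 4))

# Ensemble of linear reward models (stand-ins for learned reward networks).
ensemble = rng.normal(size=(5, 4))  # 5 members, 4 features

def preference_prob(w, seg_a, seg_b):
    """Bradley-Terry probability that seg_a is preferred over seg_b."""
    return 1.0 / (1.0 + np.exp(-(w @ seg_a - w @ seg_b)))

def query_disagreement(seg_a, seg_b):
    """Variance of preference predictions across the ensemble:
    a simple informativeness proxy for choosing which pair to query."""
    probs = np.array([preference_prob(w, seg_a, seg_b) for w in ensemble])
    return probs.var()

# Score candidate segment pairs and query the most informative one.
pairs = [(i, j) for i in range(10) for j in range(i + 1, 10)]
scores = [query_disagreement(segments[i], segments[j]) for i, j in pairs]
best_pair = pairs[int(np.argmax(scores))]

def scheduled_gamma(step, total_steps, gamma_lo=0.9, gamma_hi=0.99):
    """Discount scheduling sketch: ramp gamma upward during training so
    early policy optimization is less sensitive to the still-inaccurate
    learned reward (short horizon first, long horizon later)."""
    frac = min(step / total_steps, 1.0)
    return gamma_lo + frac * (gamma_hi - gamma_lo)
```

Here the pair with the highest ensemble variance is the one the reward models disagree on most, so a human label for it is expected to be most informative; the actual acquisition rule and schedule shape in OPRIDE may differ.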

Yiqin Yang, Hao Hu, Yihuan Mao, Jin Zhang, Chengjie Wu, Yuhua Jiang, Xu Yang, Runpeng Xie, Yi Fan, Bo Liu, Yang Gao, Bo Xu, Chongjie Zhang • 2026

Related benchmarks

Task | Dataset | Metric | Result | Rank
Reinforcement Learning | Atari Breakout | Mean Return | 256.7 | 23
Reinforcement Learning | Atari 2600 Qbert | Score | 1.35e+4 | 20
Reinforcement Learning | Atari Pong | Mean Episode Return | 17.8 | 19
HalfCheetah | D4RL Medium v0 | Normalized Score | 42.4 | 19
Robotic Manipulation | D4RL Kitchen-Mixed | Normalized Score | 49.8 | 14
Robotic Manipulation | D4RL Kitchen-Partial | Normalized Score | 38.7 | 14
Reinforcement Learning | Atari 2600 Seaquest | Average Score | 3.48e+3 | 12
Offline Reinforcement Learning | AntMaze | Success Rate (umaze) | 87.5 | 5
Reinforcement Learning | Atari Asterix | Score | 426.9 | 5
Robot Manipulation | Meta-World | Success Rate (lever-pull) | 51.8 | 5

Showing 10 of 24 rows.
