
Pessimistic Bootstrapping for Uncertainty-Driven Offline Reinforcement Learning

About

Offline Reinforcement Learning (RL) aims to learn policies from previously collected datasets without exploring the environment. Directly applying off-policy algorithms to offline RL usually fails due to the extrapolation error caused by out-of-distribution (OOD) actions. Previous methods tackle this problem by penalizing the Q-values of OOD actions or constraining the trained policy to be close to the behavior policy. However, such methods typically prevent the value function from generalizing beyond the offline data and lack a precise characterization of OOD data. In this paper, we propose Pessimistic Bootstrapping for offline RL (PBRL), a purely uncertainty-driven offline algorithm without explicit policy constraints. Specifically, PBRL quantifies uncertainty via the disagreement of bootstrapped Q-functions and performs pessimistic updates by penalizing the value function based on the estimated uncertainty. To tackle the extrapolation error, we further propose a novel OOD sampling method. We show that such OOD sampling and pessimistic bootstrapping yield a provable uncertainty quantifier in linear MDPs, thus providing the theoretical underpinning for PBRL. Extensive experiments on the D4RL benchmark show that PBRL outperforms state-of-the-art algorithms.
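The core idea — penalizing value estimates by the disagreement of an ensemble of bootstrapped Q-functions — can be illustrated with a minimal sketch. Note this is an illustrative simplification, not the paper's implementation: the function name `pessimistic_target`, the use of the ensemble standard deviation as the uncertainty measure, and the penalty coefficient `beta` are assumptions for exposition.

```python
import numpy as np

def pessimistic_target(q_values: np.ndarray, beta: float = 1.0) -> float:
    """Pessimistic value estimate from an ensemble of bootstrapped Q-values.

    q_values: shape (K,), one estimate per bootstrapped Q-function.
    beta: coefficient scaling the uncertainty penalty (illustrative).
    """
    mean_q = q_values.mean()
    # Disagreement across the ensemble serves as the uncertainty quantifier.
    uncertainty = q_values.std()
    return float(mean_q - beta * uncertainty)

# An OOD action, where the bootstrapped Q-functions disagree, is penalized
# more heavily than an in-distribution action with the same mean value.
in_dist = pessimistic_target(np.array([5.0, 5.1, 4.9, 5.0]))
ood = pessimistic_target(np.array([3.0, 7.0, 1.0, 9.0]))
```

Here both ensembles have mean 5.0, but the high-disagreement (OOD-like) input receives a much lower pessimistic value, which is the mechanism that discourages the policy from exploiting extrapolation error.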

Chenjia Bai, Lingxiao Wang, Zhuoran Yang, Zhihong Deng, Animesh Garg, Peng Liu, Zhaoran Wang • 2022

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Offline Reinforcement Learning | D4RL walker2d-random | Normalized Score | 8.1 | 77 |
| Offline Reinforcement Learning | D4RL Walker2d Medium v2 | Normalized Return | 89.6 | 67 |
| Offline Reinforcement Learning | D4RL halfcheetah v2 (medium-replay) | Normalized Score | 45.1 | 58 |
| Offline Reinforcement Learning | D4RL hopper-expert v2 | Normalized Score | 110.5 | 56 |
| Offline Reinforcement Learning | D4RL walker2d-expert v2 | Normalized Score | 108.3 | 56 |
| Offline Reinforcement Learning | D4RL halfcheetah-expert v2 | Normalized Score | 92.4 | 56 |
| Offline Reinforcement Learning | D4RL Hopper-medium-replay v2 | Normalized Return | 100.6 | 54 |
| Offline Reinforcement Learning | D4RL Gym walker2d (medium-replay) | Normalized Return | 77.7 | 52 |
| Offline Reinforcement Learning | D4RL Hopper-medium-expert v2 | Normalized Return | 110.8 | 49 |
| Offline Reinforcement Learning | D4RL Gym halfcheetah-medium | Normalized Return | 57.9 | 44 |

Showing 10 of 45 rows.
