
Double Check Your State Before Trusting It: Confidence-Aware Bidirectional Offline Model-Based Imagination

About

The learned policy of model-free offline reinforcement learning (RL) methods is often constrained to stay within the support of the dataset to avoid potentially dangerous out-of-distribution actions or states, making it challenging to handle out-of-support regions. Model-based RL methods can enrich the dataset and benefit generalization by generating imaginary trajectories with either a trained forward or reverse dynamics model. However, the imagined transitions may be inaccurate, degrading the performance of the underlying offline RL method. In this paper, we propose to augment the offline dataset using trained bidirectional dynamics models and rollout policies with a double check mechanism. We introduce conservatism by trusting only samples on which the forward and backward models agree. Our method, confidence-aware bidirectional offline model-based imagination, generates reliable samples and can be combined with any model-free offline RL method. Experimental results on the D4RL benchmarks demonstrate that our method significantly boosts the performance of existing model-free offline RL algorithms and achieves competitive or better scores than baseline methods.

Jiafei Lyu, Xiu Li, Zongqing Lu• 2022
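The "double check" idea in the abstract, trusting an imagined transition only when the forward and backward dynamics models agree on it, can be sketched in a few lines. The function name, the L2-reconstruction criterion, and the toy dynamics below are illustrative assumptions, not the authors' exact implementation:

```python
import numpy as np

def double_check(forward_model, backward_model, state, action, threshold):
    """Accept an imagined transition only if the forward and backward
    dynamics models agree on it (a sketch of the 'double check' idea).

    forward_model(state, action)       -> predicted next state
    backward_model(next_state, action) -> predicted previous state
    """
    next_state = forward_model(state, action)
    # Reconstruct the current state from the imagined next state.
    reconstructed = backward_model(next_state, action)
    # Low disagreement between the two models => trust the sample.
    disagreement = np.linalg.norm(reconstructed - state)
    return disagreement <= threshold, next_state

# Toy deterministic dynamics: the true transition is s' = s + a.
forward = lambda s, a: s + a
good_backward = lambda s_next, a: s_next - a        # consistent inverse
bad_backward = lambda s_next, a: s_next + 0.5 * a   # inconsistent inverse

s = np.array([0.0, 1.0])
a = np.array([0.2, -0.1])
accepted, _ = double_check(forward, good_backward, s, a, threshold=1e-6)
rejected, _ = double_check(forward, bad_backward, s, a, threshold=1e-6)
print(accepted, rejected)  # True False
```

In the full method this check would run on every imagined transition before it is added to the augmented replay buffer, so only mutually consistent samples reach the downstream model-free offline RL algorithm.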

Related benchmarks

Task                           | Dataset                          | Normalized Score | Rank
-------------------------------|----------------------------------|------------------|-----
Offline Reinforcement Learning | D4RL halfcheetah-medium-expert   | 107.6            | 155
Offline Reinforcement Learning | D4RL hopper-medium-expert        | 112.4            | 153
Offline Reinforcement Learning | D4RL walker2d-medium-expert      | 108.6            | 124
Offline Reinforcement Learning | D4RL Medium HalfCheetah          | 45.1             | 97
Offline Reinforcement Learning | D4RL Medium-Replay Hopper        | 31.3             | 97
Offline Reinforcement Learning | D4RL Medium Walker2d             | 82.0             | 96
Offline Reinforcement Learning | D4RL walker2d-random             | 6.4              | 93
Offline Reinforcement Learning | D4RL halfcheetah-random          | 15.1             | 86
Offline Reinforcement Learning | D4RL Medium-Replay HalfCheetah   | 44.4             | 84
Offline Reinforcement Learning | D4RL hopper-random               | 11.9             | 78

Showing 10 of 44 rows.
