
Offline Reinforcement Learning with Reverse Model-based Imagination

About

In offline reinforcement learning (offline RL), one of the main challenges is to deal with the distributional shift between the learning policy and the given dataset. To address this problem, recent offline RL methods attempt to introduce conservatism bias to encourage learning in high-confidence areas. Model-free approaches directly encode such bias into policy or value function learning using conservative regularizations or special network structures, but their constrained policy search limits generalization beyond the offline dataset. Model-based approaches learn forward dynamics models with conservatism quantifications and then generate imaginary trajectories to extend the offline datasets. However, due to the limited samples in offline datasets, conservatism quantifications often suffer from overgeneralization in out-of-support regions. These unreliable conservatism measures can mislead forward model-based imagination into undesired regions, leading to over-aggressive behaviors. To encourage more conservatism, we propose a novel model-based offline RL framework, called Reverse Offline Model-based Imagination (ROMI). We learn a reverse dynamics model in conjunction with a novel reverse policy, which can generate rollouts leading to the target goal states within the offline dataset. These reverse imaginations provide informed data augmentation for model-free policy learning and enable conservative generalization beyond the offline dataset. ROMI can effectively combine with off-the-shelf model-free algorithms to enable model-based generalization with proper conservatism. Empirical results show that our method can generate more conservative behaviors and achieve state-of-the-art performance on offline RL benchmark tasks.
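The core idea described above, rolling a learned reverse policy and reverse dynamics model backward from dataset states to synthesize transitions that end inside the data support, can be sketched as follows. This is a minimal illustration, not the authors' implementation: `reverse_policy`, `reverse_dynamics`, and all dimensions and coefficients here are hypothetical placeholders standing in for learned models.

```python
# Minimal sketch of reverse model-based imagination (ROMI-style).
# All models below are hypothetical placeholders for learned networks.
import numpy as np

rng = np.random.default_rng(0)
STATE_DIM, ACTION_DIM = 3, 2

def reverse_policy(next_state):
    """Placeholder reverse policy: proposes an action that could have led to next_state."""
    return np.tanh(next_state[:ACTION_DIM] + 0.1 * rng.standard_normal(ACTION_DIM))

def reverse_dynamics(next_state, action):
    """Placeholder reverse dynamics model: predicts the predecessor state."""
    return next_state - 0.05 * np.concatenate([action, np.zeros(STATE_DIM - ACTION_DIM)])

def reverse_rollout(goal_state, horizon=5):
    """Roll backward from a dataset state, collecting imagined (s, a, s') tuples.

    Each tuple is stored in forward order, so the imagined trajectory
    ends exactly at a state that lies within the offline dataset.
    """
    transitions = []
    s_next = np.asarray(goal_state, dtype=float)
    for _ in range(horizon):
        a = reverse_policy(s_next)
        s = reverse_dynamics(s_next, a)
        transitions.append((s, a, s_next))
        s_next = s
    return transitions[::-1]  # chronological order, terminating at the goal state

# Augment the offline buffer with reverse imaginations anchored at dataset states;
# a model-free learner would then train on real plus imagined transitions.
dataset_states = [rng.standard_normal(STATE_DIM) for _ in range(4)]
augmented = [t for g in dataset_states for t in reverse_rollout(g)]
print(len(augmented))  # 4 anchor states x horizon 5 = 20 imagined transitions
```

Because every imagined trajectory terminates at a real dataset state, the augmentation steers generalization toward supported regions rather than away from them, which is the conservatism argument made in the abstract.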

Jianhao Wang, Wenzhe Li, Haozhe Jiang, Guangxiang Zhu, Siyuan Li, Chongjie Zhang • 2021

Related benchmarks

Task                            Dataset                             Result                   Rank
Offline Reinforcement Learning  D4RL halfcheetah-medium-expert      Normalized Score 89.5    117
Offline Reinforcement Learning  D4RL hopper-medium-expert           Normalized Score 111.4   115
Offline Reinforcement Learning  D4RL walker2d-medium-expert         Normalized Score 110.7   86
Offline Reinforcement Learning  D4RL walker2d-random                Normalized Score 7.5     77
Offline Reinforcement Learning  D4RL halfcheetah-random             Normalized Score 24.5    70
Offline Reinforcement Learning  D4RL hopper-random                  Normalized Score 30.2    62
Offline Reinforcement Learning  D4RL Gym walker2d (medium-replay)   Normalized Return 109.7  52
Offline Reinforcement Learning  D4RL Gym halfcheetah-medium         Normalized Return 49.1   44
Offline Reinforcement Learning  D4RL Gym walker2d medium            Normalized Return 84.3   42
Offline Reinforcement Learning  D4RL antmaze-umaze (diverse)        Normalized Score 63.6    40
Showing 10 of 39 benchmark results.
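For context on the Result column: D4RL's convention is to normalize raw episode returns so that 0 corresponds to a random policy and 100 to an expert policy on the same task. A minimal sketch of that normalization (the reference returns below are illustrative numbers, not the actual D4RL reference values for any task):

```python
def d4rl_normalized_score(raw_return, random_return, expert_return):
    """Scale a raw return to D4RL's 0-100 range:
    0 matches the random-policy return, 100 matches the expert return."""
    return 100.0 * (raw_return - random_return) / (expert_return - random_return)

# Illustrative example only: halfway between random and expert scores 50.0.
print(d4rl_normalized_score(3000.0, 0.0, 6000.0))  # 50.0
```

Scores above 100, such as hopper-medium-expert's 111.4 in the table, simply mean the learned policy's return exceeded the expert reference return.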
