
Behavior Regularized Offline Reinforcement Learning

About

In reinforcement learning (RL) research, it is common to assume access to direct online interactions with the environment. However, in many real-world applications, access to the environment is limited to a fixed offline dataset of logged experience. In such settings, standard RL algorithms have been shown to diverge or otherwise yield poor performance. Accordingly, recent work has suggested a number of remedies to these issues. In this work, we introduce a general framework, behavior regularized actor critic (BRAC), to empirically evaluate recently proposed methods as well as a number of simple baselines across a variety of offline continuous control tasks. Surprisingly, we find that many of the technical complexities introduced in recent methods are unnecessary to achieve strong performance. Additional ablations provide insights into which design choices matter most in the offline RL setting.
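The core idea behind behavior regularization is to penalize the learned policy for straying from the behavior policy that generated the offline data. A minimal sketch of the actor-side objective, assuming diagonal Gaussian policies with a closed-form KL divergence and a fixed penalty weight (the function and variable names here are illustrative, not the paper's code):

```python
import numpy as np

def gaussian_kl(mu_p, sigma_p, mu_b, sigma_b):
    # KL(pi || pi_b) between two diagonal Gaussians, one common choice
    # for the divergence used in the behavior-regularization term.
    return np.sum(
        np.log(sigma_b / sigma_p)
        + (sigma_p**2 + (mu_p - mu_b)**2) / (2.0 * sigma_b**2)
        - 0.5
    )

def brac_actor_loss(q_value, mu_pi, sigma_pi, mu_beta, sigma_beta, alpha=0.1):
    # BRAC-style policy objective (sketch): maximize the critic's Q-value
    # while penalizing divergence from an estimate of the behavior policy.
    # Returned as a loss to minimize: -Q + alpha * D(pi, pi_b).
    return -q_value + alpha * gaussian_kl(mu_pi, sigma_pi, mu_beta, sigma_beta)
```

When the learned policy matches the behavior policy, the penalty vanishes and the loss reduces to the ordinary actor-critic objective; a larger `alpha` keeps the policy closer to the data distribution.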

Yifan Wu, George Tucker, Ofir Nachum · 2019

Related benchmarks

Task                             Dataset                                Metric              Result   Rank
Offline Reinforcement Learning   D4RL halfcheetah-medium-expert         Normalized Score    52.3     155
Offline Reinforcement Learning   D4RL hopper-medium-expert              Normalized Score    7.9      153
Offline Reinforcement Learning   D4RL walker2d-medium-expert            Normalized Score    1.1      124
Offline Reinforcement Learning   D4RL walker2d-random                   Normalized Score    80       93
Offline Reinforcement Learning   D4RL halfcheetah-random                Normalized Score    24.3     86
Offline Reinforcement Learning   D4RL hopper-random                     Normalized Score    11.1     78
Offline Reinforcement Learning   D4RL Gym walker2d (medium-replay)      Normalized Return   47.9     68
Offline Reinforcement Learning   D4RL Walker2d Medium v2                Normalized Return   81.1     67
Offline Reinforcement Learning   D4RL Gym halfcheetah-medium            Normalized Return   51.9     60
Offline Reinforcement Learning   D4RL halfcheetah v2 (medium-replay)    Normalized Score    48.6     58
Showing 10 of 120 benchmark results.
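The scores above follow the D4RL normalization convention, which rescales raw episode returns so that 0 corresponds to a random policy and 100 to an expert policy on the same task. A sketch of that formula (function name is illustrative; the per-task reference returns come from the benchmark):

```python
def d4rl_normalized_score(raw_return, random_return, expert_return):
    # D4RL convention: 0 = random-policy return, 100 = expert-policy return.
    # Scores above 100 (better than expert) and below 0 are possible.
    return 100.0 * (raw_return - random_return) / (expert_return - random_return)
```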
