
Behavior Regularized Offline Reinforcement Learning

About

In reinforcement learning (RL) research, it is common to assume access to direct online interactions with the environment. However, in many real-world applications, access to the environment is limited to a fixed offline dataset of logged experience. In such settings, standard RL algorithms have been shown to diverge or otherwise yield poor performance. Accordingly, recent work has suggested a number of remedies to these issues. In this work, we introduce a general framework, behavior regularized actor critic (BRAC), to empirically evaluate recently proposed methods as well as a number of simple baselines across a variety of offline continuous control tasks. Surprisingly, we find that many of the technical complexities introduced in recent methods are unnecessary to achieve strong performance. Additional ablations provide insights into which design choices matter most in the offline RL setting.

Yifan Wu, George Tucker, Ofir Nachum • 2019
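
At a high level, BRAC augments a standard actor-critic objective with a penalty on the divergence between the learned policy and the (estimated) behavior policy that generated the dataset, applied either to the actor loss ("policy regularization") or inside the critic's target ("value penalty"). Below is a minimal sketch of the policy-regularization variant, not the authors' code: the Gaussian policy class, the single-sample KL estimate, and names such as `alpha` and `q_net` are illustrative assumptions.

```python
# Minimal BRAC-style policy-regularized actor loss (hedged sketch, PyTorch).
import torch
import torch.nn as nn

class GaussianPolicy(nn.Module):
    """Tiny diagonal-Gaussian policy: state -> Normal(mean, std)."""
    def __init__(self, state_dim, action_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * action_dim),
        )

    def dist(self, states):
        mean, log_std = self.net(states).chunk(2, dim=-1)
        return torch.distributions.Normal(mean, log_std.clamp(-5.0, 2.0).exp())

def brac_actor_loss(states, policy, behavior_policy, q_net, alpha=1.0):
    """Maximize Q while penalizing divergence from the behavior policy.

    Uses a single-sample estimate of KL(pi || pi_b); the paper also
    considers other divergences (e.g. MMD) and a value-penalty variant
    that subtracts the same penalty inside the Q-learning target.
    `q_net` is assumed to map a concatenated state-action to a scalar.
    """
    pi = policy.dist(states)
    actions = pi.rsample()                                   # reparameterized sample
    log_pi = pi.log_prob(actions).sum(-1)
    log_pi_b = behavior_policy.dist(states).log_prob(actions).sum(-1)
    q = q_net(torch.cat([states, actions], dim=-1)).squeeze(-1)
    return (-q + alpha * (log_pi - log_pi_b)).mean()
```

Setting `alpha = 0` recovers an unregularized actor-critic baseline, which is one of the simple comparisons the paper's ablations cover.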

Related benchmarks

Task | Dataset | Metric | Result | Rank
Offline Reinforcement Learning | D4RL halfcheetah-medium-expert | Normalized Score | 52.3 | 117
Offline Reinforcement Learning | D4RL hopper-medium-expert | Normalized Score | 7.9 | 115
Offline Reinforcement Learning | D4RL walker2d-medium-expert | Normalized Score | 1.1 | 86
Offline Reinforcement Learning | D4RL walker2d-random | Normalized Score | 80 | 77
Offline Reinforcement Learning | D4RL halfcheetah-random | Normalized Score | 24.3 | 70
Offline Reinforcement Learning | D4RL Walker2d Medium v2 | Normalized Return | 81.1 | 67
Offline Reinforcement Learning | D4RL hopper-random | Normalized Score | 11.1 | 62
Offline Reinforcement Learning | D4RL halfcheetah v2 (medium-replay) | Normalized Score | 48.6 | 58
Offline Reinforcement Learning | D4RL hopper-expert v2 | Normalized Score | 78.1 | 56
Offline Reinforcement Learning | D4RL walker2d-expert v2 | Normalized Score | 55.2 | 56

Showing 10 of 109 rows.
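
The Normalized Score / Normalized Return figures above follow the D4RL convention of rescaling raw episode returns so that a random policy scores roughly 0 and a domain expert scores roughly 100. A minimal sketch of that rescaling is below; the reference returns are passed in as arguments here, whereas the official per-environment constants ship with the d4rl package.

```python
def d4rl_normalized_score(episode_return: float,
                          random_return: float,
                          expert_return: float) -> float:
    """Rescale a raw return onto D4RL's 0 (random) to 100 (expert) scale.

    Callers supply the per-environment random/expert reference returns;
    the d4rl package bundles these constants for each task.
    """
    return 100.0 * (episode_return - random_return) / (expert_return - random_return)

# e.g. a return exactly halfway between random and expert maps to 50.0
```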
