
Off-Policy Deep Reinforcement Learning without Exploration

About

Many practical applications of reinforcement learning constrain agents to learn from a fixed batch of data which has already been gathered, without offering further possibility for data collection. In this paper, we demonstrate that due to errors introduced by extrapolation, standard off-policy deep reinforcement learning algorithms, such as DQN and DDPG, are incapable of learning with data uncorrelated to the distribution under the current policy, making them ineffective for this fixed batch setting. We introduce a novel class of off-policy algorithms, batch-constrained reinforcement learning, which restricts the action space in order to force the agent towards behaving close to on-policy with respect to a subset of the given data. We present the first continuous control deep reinforcement learning algorithm which can learn effectively from arbitrary, fixed batch data, and empirically demonstrate the quality of its behavior in several tasks.
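To make the batch-constrained idea concrete, here is a minimal sketch in a tabular, discrete-action setting. All names here (`n_states`, `tau`, `batch`) are illustrative assumptions; the paper's actual algorithm (BCQ) targets continuous control and uses a generative model of the batch plus a perturbation network, not this simplified rule.

```python
import numpy as np

# Illustrative sketch only: tabular, discrete-action batch-constrained
# Q-learning. Not the paper's continuous-control algorithm.
rng = np.random.default_rng(0)
n_states, n_actions = 5, 3
gamma, alpha, tau = 0.99, 0.1, 0.3  # tau: action-eligibility threshold

# Fixed batch of transitions (s, a, r, s') gathered by some behavior
# policy; the agent never interacts with the environment again.
batch = [(rng.integers(n_states), rng.integers(n_actions),
          rng.random(), rng.integers(n_states)) for _ in range(200)]

# Estimate the behavior policy from action counts in the batch.
counts = np.zeros((n_states, n_actions))
for s, a, r, s2 in batch:
    counts[s, a] += 1
probs = counts / np.maximum(counts.sum(axis=1, keepdims=True), 1)

Q = np.zeros((n_states, n_actions))
for _ in range(50):
    for s, a, r, s2 in batch:
        # Batch constraint: the max in the target is taken only over
        # actions sufficiently likely under the data-generating policy,
        # suppressing extrapolation error on unseen state-action pairs.
        allowed = probs[s2] >= tau * probs[s2].max()
        target = r + gamma * Q[s2, allowed].max()
        Q[s, a] += alpha * (target - Q[s, a])
```

Restricting the target's argmax this way keeps bootstrapped values anchored to state-action pairs the batch actually covers, which is the core mechanism the abstract describes.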

Scott Fujimoto, David Meger, Doina Precup • 2018

Related benchmarks

Task | Dataset | Result | Rank
Offline Reinforcement Learning | D4RL halfcheetah-medium-expert | Normalized Score: 91 | 117
Offline Reinforcement Learning | D4RL hopper-medium-expert | Normalized Score: 110.9 | 115
Auto-bidding | AuctionNet | Score: 354.5 | 90
Offline Reinforcement Learning | D4RL walker2d-medium-expert | Normalized Score: 110.7 | 86
Offline Reinforcement Learning | D4RL walker2d-random | Normalized Score: 4.9 | 77
Offline Reinforcement Learning | D4RL halfcheetah-random | Normalized Score: 2.3 | 70
Offline Reinforcement Learning | D4RL Walker2d Medium v2 | Normalized Return: 47.7 | 67
Offline Reinforcement Learning | D4RL hopper-random | Normalized Score: 10.6 | 62
Offline Reinforcement Learning | D4RL halfcheetah v2 (medium-replay) | Normalized Score: 39 | 58
Offline Reinforcement Learning | D4RL walker2d-expert v2 | Normalized Score: 110.4 | 56
Showing 10 of 170 rows
