
Conservative Offline Distributional Reinforcement Learning

About

Many reinforcement learning (RL) problems in practice are offline, learning purely from observational data. A key challenge is ensuring that the learned policy is safe, which requires quantifying the risk associated with different actions. In the online setting, distributional RL algorithms do so by learning the distribution over returns (i.e., cumulative rewards) instead of the expected return; beyond quantifying risk, they have also been shown to learn better representations for planning. We propose Conservative Offline Distributional Actor Critic (CODAC), an offline RL algorithm suitable for both risk-neutral and risk-averse domains. CODAC adapts distributional RL to the offline setting by penalizing the predicted quantiles of the return for out-of-distribution actions. We prove that CODAC learns a conservative return distribution: in particular, for finite MDPs, CODAC converges to a uniform lower bound on the quantiles of the return distribution; our proof relies on a novel analysis of the distributional Bellman operator. In our experiments, on two challenging robot navigation tasks, CODAC successfully learns risk-averse policies using offline data collected purely from risk-neutral agents. Furthermore, CODAC is state-of-the-art on the D4RL MuJoCo benchmark in terms of both expected and risk-sensitive performance.
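The abstract describes two ingredients: a quantile-based distributional critic and a conservative penalty that pushes down predicted return quantiles for out-of-distribution actions. The following is a minimal NumPy sketch of that idea, not the authors' implementation; the function names, the CQL-style gap penalty, and the `alpha` weight are illustrative assumptions.

```python
import numpy as np

def quantile_huber_loss(pred, target, taus, kappa=1.0):
    """Standard quantile-regression Huber loss between predicted and
    target return quantiles (the distributional TD objective)."""
    u = target[None, :] - pred[:, None]            # pairwise TD errors (N, M)
    huber = np.where(np.abs(u) <= kappa,
                     0.5 * u ** 2,
                     kappa * (np.abs(u) - 0.5 * kappa))
    # asymmetric weighting by quantile level tau
    weight = np.abs(taus[:, None] - (u < 0))
    return float((weight * huber / kappa).mean())

def codac_style_objective(pred_data, pred_ood, target, taus, alpha=1.0):
    """Distributional TD loss on dataset actions plus a conservative
    penalty: lower the quantiles predicted for out-of-distribution
    actions relative to those for in-distribution actions."""
    td = quantile_huber_loss(pred_data, target, taus)
    penalty = pred_ood.mean() - pred_data.mean()   # CQL-style gap on quantiles
    return td + alpha * penalty
```

Minimizing this objective drives the critic toward a pessimistic (lower-bounded) return distribution for actions unsupported by the data, which is the mechanism the paper's theory analyzes.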

Yecheng Jason Ma, Dinesh Jayaraman, Osbert Bastani • 2021

Related benchmarks

Task                                              Dataset                                     Metric                 Result    Rank
Offline Reinforcement Learning                    puzzle-4x4-play OGBench 5 tasks v0          Average Success Rate   20        28
Offline Reinforcement Learning                    scene-play OGBench 5 tasks v0               Average Success Rate   55        26
Offline Reinforcement Learning                    cube-double-play OGBench 5 tasks v0         Average Success Rate   61        19
Offline Reinforcement Learning                    puzzle-3x3-play OGBench 5 tasks v0          Average Success Rate   20        19
Singletask Offline Reinforcement Learning         OGBench State-based Singletask Offline v0   Success Rate           80        10
(State-based)
Offline Reinforcement Learning                    OGBench cube-triple-play                    Success Rate           2         10
Offline Reinforcement Learning                    D4RL adroit (12 tasks)                      Success Rate           52        10
Offline Reinforcement Learning                    D4RL Cheetah Stochastic MuJoCo (Mixed)      Mean Return            396.4     8
Offline Reinforcement Learning                    Stochastic D4RL Hopper Medium MuJoCo        Mean Return            1.01e+3   8
Offline Reinforcement Learning                    Stochastic D4RL Hopper MuJoCo (Mixed)       Mean Return            1.55e+3   8
(Showing 10 of 16 rows)
