
Constraints Penalized Q-learning for Safe Offline Reinforcement Learning

About

We study the problem of safe offline reinforcement learning (RL): learning a policy that maximizes long-term reward while satisfying safety constraints, given only offline data and no further interaction with the environment. This setting is appealing for real-world RL applications in which data collection is costly or dangerous. Enforcing constraint satisfaction is non-trivial, especially offline, because a potentially large discrepancy between the policy distribution and the data distribution introduces errors into the estimated values of the safety constraints. We show that naïve approaches that simply combine techniques from safe RL and offline RL learn only sub-optimal solutions. We therefore develop a simple yet effective algorithm, Constraints Penalized Q-Learning (CPQ), to solve the problem. Our method admits data generated by mixed behavior policies. We present a theoretical analysis and demonstrate empirically that our approach learns robustly across a variety of benchmark control tasks, outperforming several baselines.

Haoran Xu, Xianyuan Zhan, Xiangyu Zhu • 2021
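
One way to realize the "constraints penalized" backup that CPQ is named for is to let the reward critic bootstrap only through successor actions whose estimated cost Q-value stays within the constraint budget, so value never propagates through unsafe (or out-of-distribution) actions. The abstract does not spell out this mechanism, so the following is a minimal PyTorch sketch under that assumption; the names (`qr_target_net`, `qc_net`, `cost_limit`) are illustrative, not the authors' code.

```python
# Minimal sketch of a constraints-penalized Bellman target for the reward
# critic. Assumption: successor state-action pairs whose cost Q-value exceeds
# the constraint threshold contribute zero future value. Network and argument
# names are hypothetical placeholders, not CPQ's released implementation.
import torch

def cpq_reward_target(qr_target_net, qc_net, rewards, next_obs, next_actions,
                      gamma=0.99, cost_limit=10.0):
    """Penalized Bellman target: mask out constraint-violating successors."""
    with torch.no_grad():
        qr_next = qr_target_net(next_obs, next_actions)  # reward Q at (s', a')
        qc_next = qc_net(next_obs, next_actions)         # cost Q at (s', a')
        safe = (qc_next <= cost_limit).float()           # 1 where constraint holds
        target = rewards + gamma * safe * qr_next        # no backup through unsafe a'
    return target
```

The mask makes the penalty implicit: rather than subtracting a Lagrangian cost term, the backup simply refuses to credit future reward reachable only via actions the cost critic deems unsafe.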

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Safe Reinforcement Learning | Bullet Safety Gym | Normalized Reward | 0.33 | 10 |
| Safe Reinforcement Learning | MetaDrive | Normalized Reward | -0.06 | 10 |
| BallCircle | Bullet-Safety-Gym OSRL | Reward | 0.73 | 9 |
| BallRun | Bullet-Safety-Gym OSRL | Reward | 0.55 | 9 |
| CarRun | Bullet-Safety-Gym OSRL | Reward | 0.94 | 9 |
| DroneCircle | Bullet-Safety-Gym OSRL | Reward | 0.82 | 9 |
| DroneRun | Bullet-Safety-Gym OSRL | Reward | 0.62 | 9 |
| CarCircle | Bullet-Safety-Gym OSRL | Reward | 0.64 | 9 |
| Constrained Offline Reinforcement Learning | DSRL HalfCheetahVelocity | Normalized Return | 105 | 7 |
| Constrained Offline Reinforcement Learning | DSRL CarGoal1 | Normalized Return | 0.79 | 7 |

Showing 10 of 27 rows.
