
Constrained Policy Optimization

About

For many applications of reinforcement learning, it can be more convenient to specify both a reward function and constraints, rather than trying to design behavior through the reward function alone. For example, systems that physically interact with or around humans should satisfy safety constraints. Recent advances in policy search algorithms (Mnih et al., 2016; Schulman et al., 2015; Lillicrap et al., 2016; Levine et al., 2016) have enabled new capabilities in high-dimensional control, but do not consider the constrained setting. We propose Constrained Policy Optimization (CPO), the first general-purpose policy search algorithm for constrained reinforcement learning with guarantees of near-constraint satisfaction at each iteration. Our method allows us to train neural network policies for high-dimensional control while making guarantees about policy behavior throughout training. Our guarantees are based on a new theoretical result, which is of independent interest: we prove a bound relating the expected returns of two policies to an average divergence between them. We demonstrate the effectiveness of our approach on simulated robot locomotion tasks where the agent must satisfy constraints motivated by safety.

Joshua Achiam, David Held, Aviv Tamar, Pieter Abbeel • 2017
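
The bound mentioned in the abstract can be stated concretely. Up to notation (reproduced here from the published paper, so treat the exact constants as approximate), it relates the return gap between two policies to an advantage term penalized by an average total-variation divergence:

```latex
% Performance bound underlying CPO (Achiam et al., 2017), up to notation:
J(\pi') - J(\pi) \;\ge\; \frac{1}{1-\gamma}\,
  \mathbb{E}_{s \sim d^{\pi},\, a \sim \pi'}\!\left[
    A^{\pi}(s,a) \;-\; \frac{2\gamma\,\epsilon^{\pi'}}{1-\gamma}\,
    D_{\mathrm{TV}}\bigl(\pi'(\cdot \mid s) \,\|\, \pi(\cdot \mid s)\bigr)
  \right],
\qquad
\epsilon^{\pi'} = \max_s \bigl|\,\mathbb{E}_{a \sim \pi'}\!\left[A^{\pi}(s,a)\right]\bigr|.
```

Applying the analogous bound to each constraint cost turns every policy update into a small trust-region subproblem: improve a linearized reward surrogate while keeping linearized constraint costs below their limits inside a divergence trust region. The sketch below solves a toy instance of that subproblem numerically; the dimensions, variable values, and use of a generic SLSQP solver are illustrative assumptions, not the paper's closed-form dual solution.

```python
import numpy as np
from scipy.optimize import minimize

# Toy CPO-style update step (a sketch, not the paper's exact solver):
#   maximize   g^T x                     (linearized reward improvement)
#   subject to c + b^T x <= 0            (linearized safety constraint)
#              0.5 x^T H x <= delta      (quadratic trust region, H ~ Fisher matrix)
rng = np.random.default_rng(0)
dim = 5                            # toy parameter dimension (hypothetical)
g = rng.normal(size=dim)           # reward gradient (hypothetical values)
b = rng.normal(size=dim)           # constraint-cost gradient (hypothetical values)
A = rng.normal(size=(dim, dim))
H = A @ A.T + np.eye(dim)          # symmetric positive-definite surrogate Fisher
c = 0.05                           # current slack J_C(pi_k) - d (violation if > 0)
delta = 0.1                        # trust-region radius

res = minimize(
    lambda x: -(g @ x),            # negate: scipy minimizes
    x0=np.zeros(dim),
    constraints=[
        {"type": "ineq", "fun": lambda x: -(c + b @ x)},               # safety
        {"type": "ineq", "fun": lambda x: delta - 0.5 * (x @ H @ x)},  # trust region
    ],
)
step = res.x
print("linearized constraint after step:", c + b @ step)  # should be <= 0
print("trust-region value:", 0.5 * step @ H @ step)       # should be <= delta
```

In the paper itself this subproblem is solved approximately through its dual, with a recovery step that purely reduces constraint cost when no feasible update exists inside the trust region; the generic solver above only illustrates the shape of the update.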

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| FetchReach | Gymnasium Robotics | Reward | 4.8 | 16 |
| Button2 | Safety Gymnasium | Reward | 7.73 | 16 |
| Button1 | Safety Gymnasium | Reward | 6.22 | 16 |
| Goal1 | Safety Gymnasium | Reward | 6.76 | 16 |
| HalfCheetah | MuJoCo | Reward | 6.57 | 16 |
| Goal2 | Safety Gymnasium | Reward | 8.74 | 16 |
| Safe Reinforcement Learning | Vehicle Avoidance Moving Obstacles | Verified Success Rate (50th Percentile) | 72.8 | 14 |
| Hopper Velocity | Safety Gymnasium level-2 | Safe Reward | 880 | 12 |
| Point Goal | Safety Gymnasium level-2 | Safe Reward | -1.3 | 12 |
| Car Circle | Safety Gymnasium level-2 | Safe Reward | 0.96 | 12 |
(Showing 10 of 78 rows.)
