
Responsive Safety in Reinforcement Learning by PID Lagrangian Methods

About

Lagrangian methods are widely used algorithms for constrained optimization problems, but their learning dynamics exhibit oscillations and overshoot which, when applied to safe reinforcement learning, lead to constraint-violating behavior during agent training. We address this shortcoming by proposing a novel Lagrange multiplier update method that utilizes derivatives of the constraint function. We take a controls perspective, wherein the traditional Lagrange multiplier update behaves as integral control; our terms introduce proportional and derivative control, achieving favorable learning dynamics through damping and predictive measures. We apply our PID Lagrangian methods in deep RL, setting a new state of the art in Safety Gym, a safe RL benchmark. Lastly, we introduce a new method to ease controller tuning by providing invariance to the relative numerical scales of reward and cost. Our extensive experiments demonstrate improved performance and hyperparameter robustness, while our algorithms remain nearly as simple to derive and implement as the traditional Lagrangian approach.
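The controls view in the abstract can be sketched in a few lines: the traditional dual update accumulates constraint violation (integral control), while the proposed method adds proportional and derivative terms on the measured cost. The sketch below is a minimal illustration of that idea, not the paper's implementation; the gains and cost limit are placeholder values, and the class and method names are invented for this example.

```python
class PIDLagrangian:
    """Minimal sketch of a PID-controlled Lagrange multiplier for safe RL.

    Gains (kp, ki, kd) and cost_limit are illustrative defaults, not the
    paper's tuned hyperparameters.
    """

    def __init__(self, kp=0.1, ki=0.01, kd=0.05, cost_limit=25.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.cost_limit = cost_limit
        self.integral = 0.0    # accumulated violation: the classic Lagrangian term
        self.prev_cost = None  # previous episode cost, for the derivative term

    def update(self, ep_cost):
        """Return the Lagrange multiplier given the latest episodic cost."""
        delta = ep_cost - self.cost_limit  # constraint violation this iterate
        # Integral term: the traditional multiplier update, projected to >= 0.
        self.integral = max(0.0, self.integral + self.ki * delta)
        # Derivative term: react only to rising cost (damping/prediction).
        derivative = 0.0 if self.prev_cost is None else max(0.0, ep_cost - self.prev_cost)
        self.prev_cost = ep_cost
        # The multiplier itself is kept non-negative, as in the dual problem.
        return max(0.0, self.kp * delta + self.integral + self.kd * derivative)
```

With pure integral control (kp = kd = 0) the multiplier responds only after violation has accumulated, which is the source of the oscillation and overshoot the paper describes; the proportional and derivative terms let it react to the current violation and its trend immediately.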

Adam Stooke, Joshua Achiam, Pieter Abbeel • 2020

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Robotic motion control (Circle task) | Ball-Circle | Reward 904 | 20 |
| Robotic motion control (Circle task) | Car-Circle | Reward 197.3 | 20 |
| Robotic motion control (Run task) | Ball-Run | Reward 1.21e+3 | 20 |
| Robotic motion control (Run task) | Car-Run | Reward 881.6 | 20 |
| FetchReach | Gymnasium Robotics | Reward 3.68 | 16 |
| Goal2 | Safety Gymnasium | Reward 8.73 | 16 |
| HalfCheetah | Mujoco | Reward 5.21 | 16 |
| Button2 | Safety Gymnasium | Reward 1.3 | 16 |
| Goal1 | Safety Gymnasium | Reward 6.5 | 16 |
| Button1 | Safety Gymnasium | Reward 2.49 | 16 |

Showing 10 of 30 rows.
