
Safe Reinforcement Learning Using Robust Control Barrier Functions

About

Reinforcement Learning (RL) has been shown to be effective in many scenarios. However, it typically requires the exploration of a sufficiently large number of state-action pairs, some of which may be unsafe. Consequently, its application to safety-critical systems remains a challenge. An increasingly common approach to address safety involves the addition of a safety layer that projects the RL actions onto a safe set of actions. In turn, a difficulty for such frameworks is how to effectively couple RL with the safety layer to improve the learning performance. In this paper, we frame safety as a differentiable robust-control-barrier-function layer in a model-based RL framework. Moreover, we also propose an approach to modularly learn the underlying reward-driven task, independent of safety constraints. We demonstrate that this approach both ensures safety and effectively guides exploration during training in a range of experiments, including zero-shot transfer when the reward is learned in a modular way.
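The safety layer described in the abstract projects the RL policy's proposed action onto a set of actions that satisfy a control-barrier-function condition. As a minimal, hedged sketch (not the paper's implementation, which uses a differentiable robust-CBF quadratic program), the special case of a single affine constraint a·u ≥ b admits a closed-form Euclidean projection; the function name and constraint form below are illustrative assumptions:

```python
def cbf_safety_layer(u_rl, a, b):
    """Project the RL action u_rl onto the half-space {u : a.u >= b}.

    Sketch only: in the paper's setting, a and b would come from the
    robust CBF condition  Lf h(x) + Lg h(x) u + alpha(h(x)) >= 0,
    and the general case with multiple constraints is solved as a QP.
    """
    dot = sum(ai * ui for ai, ui in zip(a, u_rl))
    if dot >= b:
        # The RL action already satisfies the safety condition; pass it through.
        return list(u_rl)
    # Otherwise apply the minimal correction along the constraint normal.
    norm_sq = sum(ai * ai for ai in a)
    scale = (b - dot) / norm_sq
    return [ui + scale * ai for ui, ai in zip(u_rl, a)]
```

Because the projection (and, in the paper, the QP solution) is differentiable almost everywhere in the RL action, gradients can flow through the safety layer during model-based training.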

Yousef Emam, Gennaro Notomista, Paul Glotfelter, Zsolt Kira, Magnus Egerstedt • 2021

Related benchmarks

Task                         | Dataset                            | Result                                        | Rank
Safe Reinforcement Learning  | Vehicle Avoidance Moving Obstacles | Verified Success Rate (50th Percentile): 73   | 14
Safe Reinforcement Learning  | Lane Following                     | Verified Rate (80): 98.7                      | 7
Safe Reinforcement Learning  | 3D Quadrotor Fixed Obstacles       | Verified-15 Count: 0.00e+0                    | 7
Safe Reinforcement Learning  | 2D Quadrotor Fixed Obstacles       | Verified Count (50): 0.00e+0                  | 7
