Safe But Not Sorry: Reducing Over-Conservatism in Safety Critics via Uncertainty-Aware Modulation
About
Ensuring the safe exploration of reinforcement learning (RL) agents is critical for deployment in real-world systems. Yet existing approaches struggle to strike the right balance: methods that tightly enforce safety often cripple task performance, while those that prioritize reward frequently violate safety constraints, producing diffuse cost landscapes that flatten gradients and stall policy improvement. We introduce the Uncertain Safety Critic (USC), a novel approach that integrates uncertainty-aware modulation and refinement into critic training. By concentrating conservatism in uncertain and costly regions while preserving sharp gradients in safe areas, USC enables policies to achieve effective reward-safety trade-offs. Extensive experiments show that USC reduces safety violations by approximately 40% while maintaining competitive or higher rewards, and reduces the error between predicted and true cost gradients by approximately 83%, breaking the prevailing trade-off between safety and performance and paving the way for scalable safe RL.
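The following is a minimal sketch of the core idea, not the authors' released implementation: it assumes an ensemble of cost critics whose disagreement serves as the epistemic uncertainty estimate, and a conservative penalty that is weighted so pessimism concentrates where both predicted cost and uncertainty are high. All names (`CostCriticEnsemble`, `modulated_cost_estimate`, `beta`) and the specific weighting form are illustrative assumptions.

```python
# Illustrative sketch only: an ensemble safety (cost) critic whose conservative
# penalty is modulated by epistemic uncertainty, so pessimism is concentrated
# in uncertain, high-cost regions while safe regions keep sharp gradients.
# Names and hyperparameters are assumptions, not the paper's released code.
import torch
import torch.nn as nn


class CostCriticEnsemble(nn.Module):
    """Small ensemble of cost critics; member disagreement approximates uncertainty."""

    def __init__(self, obs_dim: int, act_dim: int, n_members: int = 5, hidden: int = 256):
        super().__init__()
        self.members = nn.ModuleList([
            nn.Sequential(
                nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, 1),
            )
            for _ in range(n_members)
        ])

    def forward(self, obs: torch.Tensor, act: torch.Tensor) -> torch.Tensor:
        x = torch.cat([obs, act], dim=-1)
        # Shape: (n_members, batch)
        return torch.stack([m(x).squeeze(-1) for m in self.members], dim=0)


def modulated_cost_estimate(q_all: torch.Tensor, beta: float = 1.0) -> torch.Tensor:
    """Uncertainty-aware conservative cost estimate: mean + beta * w * std.

    The weight w is large only when both the predicted cost and the ensemble
    disagreement are high, so conservatism is concentrated in costly,
    uncertain regions (assumed weighting form).
    """
    mean, std = q_all.mean(dim=0), q_all.std(dim=0)
    w = torch.sigmoid(mean) * (std / (std + 1.0))  # in [0, 1)
    return mean + beta * w * std


def critic_loss(ensemble, obs, act, cost, next_obs, next_act, gamma=0.99, beta=1.0):
    """One TD step for the cost critic against a modulated (pessimistic) target."""
    with torch.no_grad():
        target = cost + gamma * modulated_cost_estimate(ensemble(next_obs, next_act), beta)
    q_all = ensemble(obs, act)
    return ((q_all - target.unsqueeze(0)) ** 2).mean()
```

In a full agent, the policy update would consume `modulated_cost_estimate` as the constrained cost signal, for example inside a Lagrangian objective; this is one plausible way to realize the reward-safety trade-off described above.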
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Button1 | Safety Gymnasium | Reward | 7.65 | 16 |
| Button2 | Safety Gymnasium | Reward | 9.59 | 16 |
| Goal2 | Safety Gymnasium | Reward | 9.48 | 16 |
| FetchReach | Gymnasium Robotics | Reward | 4.78 | 16 |
| Goal1 | Safety Gymnasium | Reward | 7.19 | 16 |
| HalfCheetah | MuJoCo | Reward | 6.64 | 16 |
| Button navigation | Safety Gymnasium Button1 v0 (test) | Success Rate | 100 | 8 |
| Button navigation | Safety Gymnasium Button2 v0 (test) | Success Rate | 100 | 8 |
| Robotic reaching | Gymnasium Robotics FetchReach v0 (test) | Success Rate | 100 | 8 |
| Goal achievement | Safety Gymnasium Goal1 v0 (test) | Success Rate | 100 | 8 |