
Enhance the Safety in Reinforcement Learning by ADRC Lagrangian Methods

About

Safe reinforcement learning (Safe RL) seeks to maximize rewards while satisfying safety constraints, and is typically addressed through Lagrangian-based methods. However, existing approaches, including classical and PID Lagrangian methods, suffer from oscillations and frequent safety violations due to parameter sensitivity and inherent phase lag. To address these limitations, we propose ADRC-Lagrangian methods that leverage Active Disturbance Rejection Control (ADRC) for enhanced robustness and reduced oscillations. Our unified framework encompasses classical and PID Lagrangian methods as special cases while significantly improving safety performance. Extensive experiments demonstrate that our approach reduces safety violations by up to 74%, constraint violation magnitudes by 89%, and average costs by 67%, establishing superior effectiveness for Safe RL in complex environments.
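To make the family of multiplier updates concrete, the sketch below contrasts the classical Lagrangian update, the PID Lagrangian update, and an ADRC-style update in which a linear extended state observer (ESO) tracks the constraint error and a lumped "disturbance" term. This is an illustrative reconstruction under our own assumptions (gain names `kp`, `ki`, `kd`, `beta1`, `beta2` and the scalar ESO form are ours), not the authors' exact algorithm:

```python
import math

def classical_update(lam, cost, limit, lr=0.05):
    """Classical Lagrangian: projected gradient ascent on the multiplier."""
    return max(0.0, lam + lr * (cost - limit))

class PIDLagrangian:
    """PID Lagrangian: proportional/integral/derivative terms on the
    constraint error, with the multiplier clipped to stay nonnegative."""
    def __init__(self, kp=0.1, ki=0.01, kd=0.01):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, cost, limit):
        err = cost - limit
        self.integral = max(0.0, self.integral + err)
        deriv = err - self.prev_err
        self.prev_err = err
        return max(0.0, self.kp * err + self.ki * self.integral + self.kd * deriv)

class ADRCLagrangian:
    """ADRC-style multiplier (illustrative sketch): a linear ESO estimates
    the constraint error (z1) and a lumped disturbance (z2); the multiplier
    is driven by the disturbance-compensated error, which is the ADRC
    mechanism for damping oscillation and phase lag."""
    def __init__(self, beta1=0.5, beta2=0.25, kp=0.1, lr=0.05):
        self.z1 = 0.0          # ESO estimate of the constraint error
        self.z2 = 0.0          # ESO estimate of the total disturbance
        self.beta1, self.beta2 = beta1, beta2
        self.kp, self.lr = kp, lr
        self.lam = 0.0

    def update(self, cost, limit):
        err = cost - limit
        e = err - self.z1                      # observer innovation
        self.z1 += self.beta1 * e + self.z2    # error-state update
        self.z2 += self.beta2 * e              # disturbance-state update
        # compensated update: proportional drive minus estimated disturbance
        self.lam = max(0.0, self.lam + self.lr * (self.kp * self.z1 - self.z2))
        return self.lam
```

With `ki = kd = 0` the PID update reduces to the proportional (classical-style) case, which is the sense in which a unified framework can recover the simpler methods as special cases.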

Mingxu Zhang, Huicheng Zhang, Jiaming Ji, Yaodong Yang, Ying Sun • 2026

Related benchmarks

Task | Dataset | Result | Rank
---- | ------- | ------ | ----
Safe Reinforcement Learning | CarButton (test) | Violation Rate: 50.16 | 12
Safe Reinforcement Learning | CarCircle (test) | Violation Rate: 17.71 | 12
Safe Reinforcement Learning | CarGoal | Violation Rate: 29.12 | 12
Safe Reinforcement Learning | CarPush | Violation Rate: 34.75 | 12
Safe Reinforcement Learning | RacecarCircle (test) | Violation Rate: 0.2364 | 12
Safe Reinforcement Learning | RacecarGoal | Violation Rate: 34.03 | 12
Safe Reinforcement Learning | RacecarPush | Violation Rate: 26.06 | 12
Safe Reinforcement Learning | AntButton (test) | Violation Rate: 0.00e+0 | 12
Safe Reinforcement Learning | AntCircle (test) | Violation Rate: 0.00e+0 | 12
Safe Reinforcement Learning | RacecarButton (test) | Violation Rate: 80.06 | 12

Showing 10 of 17 rows.
