
Reward Constrained Policy Optimization

About

Solving tasks in Reinforcement Learning is no easy feat. As the goal of the agent is to maximize the accumulated reward, it often learns to exploit loopholes and misspecifications in the reward signal, resulting in unwanted behavior. While constraints may solve this issue, there is no closed-form solution for general constraints. In this work we present a novel multi-timescale approach for constrained policy optimization, called "Reward Constrained Policy Optimization" (RCPO), which uses an alternative penalty signal to guide the policy towards a constraint-satisfying one. We prove the convergence of our approach and provide empirical evidence of its ability to train constraint-satisfying policies.

Chen Tessler, Daniel J. Mankowitz, Shie Mannor • 2018
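As a reading aid, the "alternative penalty signal" in the abstract is a Lagrangian relaxation of the constrained policy optimization program. In standard constrained-MDP notation (the threshold α is the generic constraint bound, not a value taken from this listing):

```latex
% Constrained policy optimization and its Lagrangian relaxation.
% J_R^pi: expected accumulated reward, J_C^pi: expected constraint
% cost, alpha: the allowed constraint level.
\max_{\pi} \; J_R^{\pi} \quad \text{s.t.} \quad J_C^{\pi} \le \alpha

\min_{\lambda \ge 0} \, \max_{\pi} \;
  L(\lambda, \pi) = J_R^{\pi} - \lambda \left( J_C^{\pi} - \alpha \right)
```

The sketch below illustrates the resulting two-timescale update on a toy two-armed bandit: the policy ascends the penalized reward r - λ·c on a fast timescale, while the multiplier λ adjusts slowly toward constraint satisfaction. The bandit, step sizes, and threshold are illustrative assumptions, not the paper's experimental setup or code.

```python
import numpy as np

rng = np.random.default_rng(0)

REWARD = np.array([1.0, 2.0])  # arm 1 pays more reward...
COST = np.array([0.0, 1.0])    # ...but incurs constraint cost
ALPHA = 0.2                    # allowed expected cost (assumed)

theta = np.zeros(2)            # policy logits (fast timescale)
lam = 0.0                      # Lagrange multiplier (slow timescale)
POLICY_LR, LAMBDA_LR = 0.05, 0.01

for _ in range(5000):
    probs = np.exp(theta) / np.exp(theta).sum()  # softmax policy
    a = rng.choice(2, p=probs)
    # Penalized reward r_hat = r - lam * c: the penalty signal that
    # guides the policy toward a constraint-satisfying one.
    r_hat = REWARD[a] - lam * COST[a]
    # REINFORCE step on the penalized objective:
    # grad log pi(a) = one_hot(a) - probs for a softmax policy.
    grad = -probs
    grad[a] += 1.0
    theta += POLICY_LR * r_hat * grad
    # Slower projected ascent on lambda: grow while the constraint
    # is violated, shrink (but stay >= 0) once it is satisfied.
    lam = max(0.0, lam + LAMBDA_LR * (COST[a] - ALPHA))

print(f"P(safe arm) = {probs[0]:.3f}, lambda = {lam:.3f}")
```

Keeping the multiplier on a strictly slower timescale than the policy is what the paper's multi-timescale convergence argument relies on; with a shared step size the coupled updates can oscillate instead of settling at a feasible policy.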

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| HalfCheetah | Mujoco | Reward | 9.48 | 16 |
| Goal1 | Safety Gymnasium | Reward | 6.74 | 16 |
| FetchReach | Gymnasium Robotics | Reward | 4.72 | 16 |
| Button1 | Safety Gymnasium | Reward | 4.21 | 16 |
| Button2 | Safety Gymnasium | Reward | 3.67 | 16 |
| Goal2 | Safety Gymnasium | Reward | 8.28 | 16 |
| Car Circle | Safety Gymnasium level-2 | Safe Reward | 10 | 12 |
| Point Goal | Safety Gymnasium level-2 | Safe Reward | -0.012 | 12 |
| Car Goal | Safety Gymnasium level-2 | Safe Reward | 0.21 | 12 |
| Point Push | Safety Gymnasium level-2 | Safe Reward | -0.48 | 12 |

Showing 10 of 21 rows.
