IPO: Interior-point Policy Optimization under Constraints
About
In this paper, we study reinforcement learning (RL) algorithms for real-world decision problems whose objective is to maximize the long-term reward while satisfying cumulative constraints. We propose a novel first-order policy optimization method, Interior-point Policy Optimization (IPO), which augments the objective with logarithmic barrier functions, inspired by the interior-point method. Our method is easy to implement, comes with performance guarantees, and handles general cumulative multi-constraint settings. We conduct extensive evaluations comparing our approach with state-of-the-art baselines, and our algorithm outperforms them in both reward maximization and constraint satisfaction.
Yongshuai Liu, Jiaxin Ding, Xin Liu • 2019
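
To make the log-barrier idea concrete, below is a minimal PyTorch sketch of the augmented objective, assuming a PPO-style clipped surrogate. The function names, the barrier coefficient `t`, and the `eps` clamp are illustrative assumptions, not the paper's exact implementation.

```python
import torch

def log_barrier(constraint_estimate, limit, t=10.0, eps=1e-8):
    # Logarithmic barrier for a cumulative constraint J_C <= d:
    # phi = log(d - J_C) / t tends to -inf as the estimate nears
    # its limit, steering the policy away from the constraint boundary.
    # eps clamp (an assumption) keeps the log finite if slack is violated.
    slack = limit - constraint_estimate
    return torch.log(slack.clamp_min(eps)) / t

def ipo_objective(clip_surrogate, constraint_estimates, limits, t=10.0):
    # IPO-style objective: a PPO clipped surrogate augmented with one
    # log-barrier term per cumulative constraint (maximized jointly).
    barrier = sum(log_barrier(c, d, t) for c, d in zip(constraint_estimates, limits))
    return clip_surrogate + barrier
```

Because the barrier terms are added directly to the surrogate, the combined objective can still be maximized with any first-order optimizer, which is what makes the method straightforward to bolt onto an existing PPO implementation.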
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Constrained Reinforcement Learning | GRID | Episodic Reward | 229.4 | 8 |
| Constrained Reinforcement Learning | Humanoid | Episodic Reward | 1.58e+3 | 8 |
| Constrained Reinforcement Learning | PointCircle | Episodic Reward | 68.7 | 8 |
| Constrained Reinforcement Learning | Bottleneck | Episodic Reward | 279.3 | 8 |
| Constrained Reinforcement Learning | AntCircle | Episodic Reward | 149.3 | 8 |
| Constrained Reinforcement Learning | PointReach | Episodic Reward | 49.1 | 8 |
| Constrained Reinforcement Learning | AntReach | Episodic Reward | 45.2 | 8 |
| Continuous Control | HalfCheetah-Velocity (Safety-Gymnasium, test) | Reward | 1.82e+3 | 7 |
| Safe Reinforcement Learning | Hopper-Velocity | Reward | 1.22e+3 | 7 |
| Constrained Reinforcement Learning | Navigation | Episodic Reward | 164.1 | 5 |