IPO: Interior-point Policy Optimization under Constraints
About
In this paper, we study reinforcement learning (RL) algorithms for real-world decision problems whose objective is to maximize the long-term reward while satisfying cumulative constraints. We propose a novel first-order policy optimization method, Interior-point Policy Optimization (IPO), which augments the objective with logarithmic barrier functions, inspired by the interior-point method. The proposed method is easy to implement, comes with performance guarantees, and handles general cumulative multi-constraint settings. We conduct extensive evaluations against state-of-the-art baselines; our algorithm outperforms them in both reward maximization and constraint satisfaction.
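To make the barrier idea concrete, the sketch below shows how a clipped policy-gradient objective might be augmented with a log-barrier term for each cumulative constraint, in the spirit of the interior-point method described above. This is a minimal illustration, not the paper's implementation: the helper name `ipo_objective`, the temperature `t`, and the scalar inputs are assumptions for demonstration.

```python
import math

def ipo_objective(clip_objective, constraint_values, limits, t=20.0):
    """Hypothetical log-barrier-augmented surrogate.

    clip_objective    : scalar estimate of the (e.g. clipped) reward objective
    constraint_values : estimated cumulative costs J_C_i under the policy
    limits            : constraint thresholds d_i (require J_C_i < d_i)
    t                 : barrier temperature; larger t weakens the barrier

    Adds log(d_i - J_C_i) / t per constraint; the term tends to -inf as a
    constraint approaches its limit, discouraging boundary violations.
    """
    barrier = 0.0
    for j_c, d in zip(constraint_values, limits):
        slack = d - j_c
        if slack <= 0:
            # Infeasible point: the log barrier is undefined, so return -inf
            return float("-inf")
        barrier += math.log(slack) / t
    return clip_objective + barrier
```

In practice the barrier term would be added to the policy-gradient loss and differentiated through; the scalar version here only conveys the shape of the augmented objective.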
Yongshuai Liu, Jiaxin Ding, Xin Liu • 2019
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Fair Order Matching | DeFi crypto-asset LOB BTC, ETH, SOL (500,000 held-out steps) | MQS | 0.825 | 21 |
| Fair Order Matching | LOBSTER NASDAQ | Spread | 1.45 | 17 |
| Constrained Reinforcement Learning | GRID | Episodic Reward | 229.4 | 8 |
| Constrained Reinforcement Learning | Humanoid | Episodic Reward | 1.58e+3 | 8 |
| Constrained Reinforcement Learning | PointCircle | Episodic Reward | 68.7 | 8 |
| Constrained Reinforcement Learning | Bottleneck | Episodic Reward | 279.3 | 8 |
| Constrained Reinforcement Learning | AntCircle | Episodic Reward | 149.3 | 8 |
| Constrained Reinforcement Learning | PointReach | Episodic Reward | 49.1 | 8 |
| Constrained Reinforcement Learning | AntReach | Episodic Reward | 45.2 | 8 |
| Continuous Control | HalfCheetah-Velocity Safety-Gymnasium (test) | Reward | 1.82e+3 | 7 |