Trust Region Policy Optimization
About
We describe an iterative procedure for optimizing policies, with guaranteed monotonic improvement. By making several approximations to the theoretically-justified procedure, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is similar to natural policy gradient methods and is effective for optimizing large nonlinear policies such as neural networks. Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits; and playing Atari games using images of the screen as input. Despite its approximations that deviate from the theory, TRPO tends to give monotonic improvement, with little tuning of hyperparameters.
John Schulman, Sergey Levine, Philipp Moritz, Michael I. Jordan, Pieter Abbeel · 2015
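To make the idea concrete, here is a minimal sketch of a TRPO-style update for a small categorical policy. It is illustrative rather than the authors' reference implementation: the conjugate-gradient natural-gradient step from the paper is replaced by a plain gradient direction with a backtracking line search that enforces the KL trust region. All names (`Policy`, `trpo_step`, `max_kl`, the toy data) are hypothetical.

```python
# A hedged sketch of a TRPO-style update: maximize the surrogate
# objective E[ratio * advantage] subject to a mean-KL trust region.
import torch
import torch.nn as nn

class Policy(nn.Module):
    """Tiny categorical policy over discrete actions (illustrative)."""
    def __init__(self, obs_dim=4, n_actions=2):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 32), nn.Tanh(),
                                 nn.Linear(32, n_actions))

    def dist(self, obs):
        return torch.distributions.Categorical(logits=self.net(obs))

def surrogate_and_kl(policy, old_dist, states, actions, advantages):
    """Surrogate objective E[ratio * A] and mean KL(old || new)."""
    new_dist = policy.dist(states)
    ratio = torch.exp(new_dist.log_prob(actions) - old_dist.log_prob(actions))
    surrogate = (ratio * advantages).mean()
    kl = torch.distributions.kl_divergence(old_dist, new_dist).mean()
    return surrogate, kl

def trpo_step(policy, states, actions, advantages, max_kl=1e-2):
    # Freeze the current policy as the "old" distribution.
    with torch.no_grad():
        old_dist = policy.dist(states)
    surr, _ = surrogate_and_kl(policy, old_dist, states, actions, advantages)
    grads = torch.autograd.grad(surr, list(policy.parameters()))
    params = [p.detach().clone() for p in policy.parameters()]
    # Backtracking line search: shrink the step until the KL constraint
    # holds and the surrogate improves -- the trust-region check that
    # underlies TRPO's monotonic-improvement guarantee.
    step = 1.0
    for _ in range(10):
        with torch.no_grad():
            for p, p0, g in zip(policy.parameters(), params, grads):
                p.copy_(p0 + step * g)
            new_surr, kl = surrogate_and_kl(policy, old_dist,
                                            states, actions, advantages)
        if kl <= max_kl and new_surr > surr:
            return new_surr.item(), kl.item()
        step *= 0.5
    # No acceptable step found: restore the old parameters.
    with torch.no_grad():
        for p, p0 in zip(policy.parameters(), params):
            p.copy_(p0)
    return surr.item(), 0.0

# Toy usage with random data (illustrative only):
policy = Policy()
s = torch.randn(64, 4)
a = torch.randint(0, 2, (64,))
adv = torch.randn(64)
print(trpo_step(policy, s, a, adv))
```

The key design choice, per the abstract, is that each update is accepted only if it stays inside the KL trust region; in the full algorithm the search direction comes from a conjugate-gradient solve against the Fisher information matrix rather than the raw gradient used in this sketch.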
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Reinforcement Learning | Walker | Average Returns | 229.8 | 38 |
| Reinforcement Learning | HalfCheetah v3 | Mean Reward | 4.79e+3 | 34 |
| Quadruped | Quadruped | Return | 206.2 | 33 |
| Reinforcement Learning | Humanoid | Zero-Shot Reward | 3.18e+3 | 30 |
| Reinforcement Learning | Pendulum | Avg Episode Reward | -145.5 | 26 |
| Reinforcement Learning | Ant v3 | Average Final Return | 6.20e+3 | 26 |
| Reinforcement Learning | Walker2d v3 | Average Final Return | 5.50e+3 | 26 |
| Reinforcement Learning | Hopper v3 | Average Final Return | 3.47e+3 | 26 |
| Reinforcement Learning | Humanoid v3 | Avg Final Return | 965 | 26 |
| Continuous Control | MuJoCo HalfCheetah | Average Reward | 2.01e+3 | 25 |
Showing 10 of 102 rows.