Scalable trust-region method for deep reinforcement learning using Kronecker-factored approximation
About
In this work, we propose to apply trust region optimization to deep reinforcement learning using a recently proposed Kronecker-factored approximation to the curvature. We extend the framework of natural policy gradient and propose to optimize both the actor and the critic using Kronecker-factored approximate curvature (K-FAC) with trust region; hence we call our method Actor Critic using Kronecker-Factored Trust Region (ACKTR). To the best of our knowledge, this is the first scalable trust region natural gradient method for actor-critic methods. It also learns non-trivial tasks in continuous control, as well as discrete control policies, directly from raw pixel inputs. We tested our approach across discrete domains in Atari games as well as continuous domains in the MuJoCo environment. With the proposed methods, we are able to achieve higher rewards and a 2- to 3-fold improvement in sample efficiency on average, compared to previous state-of-the-art on-policy actor-critic methods. Code is available at https://github.com/openai/baselines
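To make the core update concrete, here is a minimal NumPy sketch of a Kronecker-factored natural gradient step with trust-region scaling for a single linear layer. The random batch data, factor names (`A`, `S`), damping value, and step-size cap are illustrative assumptions, not values from the paper; a real implementation maintains running estimates of the factors across layers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear policy layer: pre-activations = W @ a, with W of shape (n_out, n_in).
n_in, n_out, batch = 4, 3, 256
W = rng.normal(size=(n_out, n_in))

# Hypothetical statistics gathered from a batch of rollouts:
a = rng.normal(size=(batch, n_in))   # layer inputs (activations)
g = rng.normal(size=(batch, n_out))  # backprop gradients w.r.t. pre-activations

# Kronecker factors of the Fisher for this layer: F ~ A (x) S.
A = a.T @ a / batch                  # input second-moment matrix, (n_in, n_in)
S = g.T @ g / batch                  # output-gradient second moment, (n_out, n_out)

# Tikhonov damping keeps the factor inverses well conditioned (value assumed).
damping = 1e-2
A_inv = np.linalg.inv(A + damping * np.eye(n_in))
S_inv = np.linalg.inv(S + damping * np.eye(n_out))

# Euclidean gradient of the surrogate loss w.r.t. W (here, a plain outer product).
G = g.T @ a / batch

# Natural gradient: (A (x) S)^-1 vec(G) is equivalent to S^-1 @ G @ A^-1.
nat_grad = S_inv @ G @ A_inv

# Trust-region scaling: shrink the step so the quadratic model of the KL
# change stays below delta; quad approximates vec(G)^T F^-1 vec(G).
delta, max_lr = 1e-3, 0.25
quad = np.sum(nat_grad * G)
eta = min(max_lr, np.sqrt(2.0 * delta / (quad + 1e-12)))
W = W - eta * nat_grad
```

The key cost saving is that inverting the two small factors (`n_in` x `n_in` and `n_out` x `n_out`) stands in for inverting the full `n_in*n_out`-dimensional Fisher, which is what makes the natural gradient scalable to deep networks.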
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Reinforcement Learning | Walker | Average Returns: 83.7 | 38 |
| Reinforcement Learning | Humanoid | Zero-Shot Reward: 260.8 | 30 |
| Reinforcement Learning | Halfcheetah | Average Return: 513 | 17 |
| Reinforcement Learning | Hopper | Avg Episode Reward: 2580 | 15 |
| Reinforcement Learning | Pendulum | Avg Episode Reward: -201.6 | 15 |
| Reinforcement Learning | MountainCar | Avg Episode Reward: 0.9379 | 14 |
| Reinforcement Learning | BipedalWalker | Average Episode Reward: 309.6 | 10 |
| Reinforcement Learning | LunarLander | Average Episode Reward: 271.5 | 10 |
| Reinforcement Learning | Supply Chain Optimization Environment (test) | Max Reward: 19.1 | 10 |
| Reinforcement Learning | Inverted Double Pendulum | Avg Episode Reward: 9360 | 10 |