
High-Dimensional Continuous Control Using Generalized Advantage Estimation

About

Policy gradient methods are an appealing approach in reinforcement learning because they directly optimize the cumulative reward and can straightforwardly be used with nonlinear function approximators such as neural networks. The two main challenges are the large number of samples typically required, and the difficulty of obtaining stable and steady improvement despite the nonstationarity of the incoming data. We address the first challenge by using value functions to substantially reduce the variance of policy gradient estimates at the cost of some bias, with an exponentially-weighted estimator of the advantage function that is analogous to TD(λ). We address the second challenge by using a trust region optimization procedure for both the policy and the value function, which are represented by neural networks. Our approach yields strong empirical results on highly challenging 3D locomotion tasks, learning running gaits for bipedal and quadrupedal simulated robots, and learning a policy for getting the biped to stand up from starting out lying on the ground. In contrast to a body of prior work that uses hand-crafted policy representations, our neural network policies map directly from raw kinematics to joint torques. Our algorithm is fully model-free, and the amount of simulated experience required for the learning tasks on 3D bipeds corresponds to 1-2 weeks of real time.
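The exponentially-weighted advantage estimator described in the abstract, GAE(γ, λ), combines one-step TD residuals δ_t = r_t + γV(s_{t+1}) − V(s_t) into A_t = Σ_l (γλ)^l δ_{t+l}. A minimal NumPy sketch of this backward recursion, assuming a single rollout with a bootstrap value appended for the final state; the function and parameter names here are illustrative, not taken from the paper's code:

```python
import numpy as np

def gae_advantages(rewards, values, gamma=0.99, lam=0.95):
    """Compute GAE(gamma, lambda) advantages for one rollout.

    rewards: length-T array of rewards r_0..r_{T-1}
    values:  length-(T+1) array of value estimates V(s_0)..V(s_T),
             where V(s_T) bootstraps beyond the rollout boundary
    """
    T = len(rewards)
    advantages = np.zeros(T)
    gae = 0.0
    # Accumulate discounted TD residuals from the end of the rollout backward:
    # A_t = delta_t + (gamma * lam) * A_{t+1}
    for t in reversed(range(T)):
        delta = rewards[t] + gamma * values[t + 1] - values[t]
        gae = delta + gamma * lam * gae
        advantages[t] = gae
    return advantages
```

With λ = 0 this reduces to the one-step TD residual (low variance, more bias); with λ = 1 it becomes the Monte Carlo advantage estimate (high variance, no bias from the value function), which is the trade-off the abstract refers to.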

John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, Pieter Abbeel • 2015

Related benchmarks

Task                   | Dataset                                            | Result                 | Rank
Game Playing           | Atari 2600 (Arcade Learning Environment) v1 (test) | Alien Score: 1.17e+3   | 13
Reinforcement Learning | Atari                                              | Overall Score: 10      | 6
Reinforcement Learning | MinAtar                                            | Overall Score: 0.00e+0 | 4
