
Accelerated Policy Learning with Parallel Differentiable Simulation

About

Deep reinforcement learning can generate complex control policies, but requires large amounts of training data to work effectively. Recent work has attempted to address this issue by leveraging differentiable simulators. However, inherent problems such as local minima and exploding/vanishing numerical gradients prevent these methods from being generally applied to control tasks with complex contact-rich dynamics, such as humanoid locomotion in classical RL benchmarks. In this work we present a high-performance differentiable simulator and a new policy learning algorithm (SHAC) that can effectively leverage simulation gradients, even in the presence of non-smoothness. Our learning algorithm alleviates problems with local minima through a smooth critic function, avoids vanishing/exploding gradients through a truncated learning window, and allows many physical environments to be run in parallel. We evaluate our method on classical RL control tasks, and show substantial improvements in sample efficiency and wall-clock time over state-of-the-art RL and differentiable simulation-based algorithms. In addition, we demonstrate the scalability of our method by applying it to the challenging high-dimensional problem of muscle-actuated locomotion with a large action space, achieving a greater than 17x reduction in training time over the best-performing established RL algorithm.
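The truncated learning window mentioned above can be illustrated with a small sketch (not the authors' implementation): the policy is optimized against a short-horizon discounted return, with a learned critic bootstrapping the value at the window boundary, which bounds the depth of backpropagation through the simulator. The function name and toy numbers below are illustrative assumptions.

```python
# Sketch of SHAC's short-horizon objective: sum discounted rewards over a
# window of length H, then bootstrap with a critic estimate V(s_H) at the
# boundary. Cutting the computation graph here is what keeps simulator
# gradients from exploding or vanishing over long rollouts.

def short_horizon_return(rewards, terminal_value, gamma=0.99):
    """Discounted return over a window of H = len(rewards) steps,
    bootstrapped with a critic's value estimate at step H."""
    ret = terminal_value
    for r in reversed(rewards):
        ret = r + gamma * ret
    return ret

# Example: a 3-step window with a critic estimate of 10.0 at the boundary.
window_return = short_horizon_return([1.0, 1.0, 1.0], terminal_value=10.0, gamma=0.5)
```

In the actual algorithm this return would be computed on differentiable simulation states (e.g. in PyTorch), with the gradient detached at the window boundary before the next window begins.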

Jie Xu, Viktor Makoviychuk, Yashraj Narang, Fabio Ramos, Wojciech Matusik, Animesh Garg, Miles Macklin • 2022

Related benchmarks

Task | Dataset | Metric | Result | Rank
Function Optimization | Ackley | Avg Max Reward | -0.4821 | 12
Function Optimization | Dejong 64 | Avg Max Reward | -9.28e-7 | 5
Function Optimization | Dejong | Avg Max Reward | -1.42e-8 | 5
Function Optimization | Ackley 64 | Avg Max Reward | -0.0089 | 5
6-DOF Helix Trajectory Tracking | BlueROV2 Heavy Centre Locked Helix Experiment 1.0 (real-world deployment) | Positional Error X (m) | 0.049 | 4
Disturbance Rejection | Disturbance rejection experiments | Positional Error X (m) | 0.059 | 4
Simulator Throughput | Physics-only Simulators | Train SPS | 4.10e+4 | 3
