Soft Actor-Critic Algorithms and Applications
About
Model-free deep reinforcement learning (RL) algorithms have been successfully applied to a range of challenging sequential decision making and control tasks. However, these methods typically suffer from two major challenges: high sample complexity and brittleness to hyperparameters. Both of these challenges limit the applicability of such methods to real-world domains. In this paper, we describe Soft Actor-Critic (SAC), our recently introduced off-policy actor-critic algorithm based on the maximum entropy RL framework. In this framework, the actor aims to simultaneously maximize expected return and entropy; that is, to succeed at the task while acting as randomly as possible. We extend SAC to incorporate a number of modifications that accelerate training and improve stability with respect to the hyperparameters, including a constrained formulation that automatically tunes the temperature hyperparameter. We systematically evaluate SAC on a range of benchmark tasks, as well as challenging real-world tasks such as locomotion for a quadrupedal robot and robotic manipulation with a dexterous hand. With these improvements, SAC achieves state-of-the-art performance, outperforming prior on-policy and off-policy methods in sample efficiency and asymptotic performance. Furthermore, we demonstrate that, in contrast to other off-policy algorithms, our approach is very stable, achieving similar performance across different random seeds. These results suggest that SAC is a promising candidate for learning in real-world robotics tasks.
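The constrained formulation mentioned above adjusts the temperature coefficient alpha so that the policy's entropy stays near a target value. A minimal sketch of that update, assuming a target entropy hyperparameter (commonly set to the negative action dimension) and illustrative names like `log_alpha` and `update_log_alpha` (a real implementation would optimize `log_alpha` with autograd, e.g. in PyTorch or JAX, rather than finite differences):

```python
import math

def alpha_loss(log_alpha, log_probs, target_entropy):
    """Temperature objective J(alpha) = E[-alpha * (log pi(a|s) + target_entropy)].

    `log_probs` are log pi(a|s) samples from the current policy.
    Minimizing this loss raises alpha when policy entropy falls below
    the target and lowers it when entropy exceeds the target.
    """
    alpha = math.exp(log_alpha)  # parameterize alpha > 0 via its log
    return sum(-alpha * (lp + target_entropy) for lp in log_probs) / len(log_probs)

def update_log_alpha(log_alpha, log_probs, target_entropy, lr=0.1):
    """One gradient step on log_alpha (finite differences for illustration)."""
    eps = 1e-4
    grad = (alpha_loss(log_alpha + eps, log_probs, target_entropy)
            - alpha_loss(log_alpha - eps, log_probs, target_entropy)) / (2 * eps)
    return log_alpha - lr * grad
```

When the policy's average entropy (`-mean(log_probs)`) is above the target, the gradient drives alpha down, weakening the entropy bonus; when entropy drops below the target, alpha rises, pushing the actor back toward more random behavior.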
Related benchmarks
| Task | Dataset | Metric | Value | Rank |
|---|---|---|---|---|
| Continuous Control | DMControl 500k | Spin Score | 923 | 33 |
| Continuous Control | DMControl 100k | Finger Spin Score | 811 | 29 |
| Navigation | PointMaze | Success Rate | 120 | 21 |
| Reinforcement Learning | MuJoCo Half-Cheetah | Average Return | 1.33e+4 | 18 |
| Navigation | Bottleneck | Success Rate | 0.00e+0 | 16 |
| Navigation | Complex | Success Rate | 0.00e+0 | 16 |
| Navigation | AntMaze | Success Rate | 0.00e+0 | 16 |
| Navigation | AntMaze Small | Success Rate | 0.00e+0 | 16 |
| Autonomous Driving | CARLA (#HW) | Error Rate | 69 | 15 |
| Visual Reinforcement Learning | CARLA (#GP scenario) | Error Rate | 38 | 15 |