
Soft Actor-Critic Algorithms and Applications

About

Model-free deep reinforcement learning (RL) algorithms have been successfully applied to a range of challenging sequential decision making and control tasks. However, these methods typically suffer from two major challenges: high sample complexity and brittleness to hyperparameters. Both of these challenges limit the applicability of such methods to real-world domains. In this paper, we describe Soft Actor-Critic (SAC), our recently introduced off-policy actor-critic algorithm based on the maximum entropy RL framework. In this framework, the actor aims to simultaneously maximize expected return and entropy; that is, to succeed at the task while acting as randomly as possible. We extend SAC to incorporate a number of modifications that accelerate training and improve stability with respect to the hyperparameters, including a constrained formulation that automatically tunes the temperature hyperparameter. We systematically evaluate SAC on a range of benchmark tasks, as well as challenging real-world tasks such as locomotion for a quadrupedal robot and robotic manipulation with a dexterous hand. With these improvements, SAC achieves state-of-the-art performance, outperforming prior on-policy and off-policy methods in sample efficiency and asymptotic performance. Furthermore, we demonstrate that, in contrast to other off-policy algorithms, our approach is very stable, achieving similar performance across different random seeds. These results suggest that SAC is a promising candidate for learning in real-world robotics tasks.
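The two ideas in the abstract can be illustrated concretely: the actor maximizes an entropy-augmented return, and the temperature that weights the entropy bonus is itself adjusted toward a target entropy. Below is a minimal NumPy sketch of those two pieces, assuming a simple gradient step in log-temperature; the function names, learning rate, and the Monte Carlo entropy estimate are illustrative, not taken from the paper's implementation.

```python
import numpy as np

def soft_return(rewards, log_probs, alpha):
    """Entropy-augmented return: sum of rewards plus alpha times entropy.

    The policy entropy H(pi(.|s)) is estimated by -log pi(a|s)
    evaluated at sampled actions.
    """
    return np.sum(rewards) + alpha * np.sum(-np.asarray(log_probs))

def update_log_alpha(log_alpha, log_probs, target_entropy, lr=1e-2):
    """One gradient step on J(alpha) = E[-alpha * (log pi(a|s) + target_H)].

    If the policy is less random than the target (entropy below target),
    log pi(a|s) + target_H is positive, the gradient on log_alpha is
    negative, and alpha grows, strengthening the entropy bonus; when the
    policy is more random than the target, alpha shrinks.
    """
    alpha = np.exp(log_alpha)
    grad = -alpha * np.mean(np.asarray(log_probs) + target_entropy)
    return log_alpha - lr * grad

# Policy entropy ~0.5 is below the target of 2.0, so alpha increases.
new_log_alpha = update_log_alpha(
    log_alpha=0.0,
    log_probs=[-0.5, -0.4, -0.6],
    target_entropy=2.0,
)
```

In the paper this dual update comes from a constrained formulation (maximize return subject to a minimum expected entropy); the sketch above shows only the resulting direction of the temperature adjustment.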

Tuomas Haarnoja, Aurick Zhou, Kristian Hartikainen, George Tucker, Sehoon Ha, Jie Tan, Vikash Kumar, Henry Zhu, Abhishek Gupta, Pieter Abbeel, Sergey Levine • 2018

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Continuous Control | DMControl 500k | Spin Score | 923 | 33 |
| Continuous Control | DMControl 100k | DMControl: Finger Spin Score | 811 | 29 |
| Navigation | PointMaze | Success Rate | 120 | 21 |
| Reinforcement Learning | MuJoCo Half-Cheetah | Average Return | 1.33e+4 | 18 |
| Navigation | Bottleneck | Success Rate | 0.00e+0 | 16 |
| Navigation | Complex | Success Rate | 0.00e+0 | 16 |
| Navigation | AntMaze | Success Rate | 0.00e+0 | 16 |
| Navigation | AntMaze Small | Success Rate | 0.00e+0 | 16 |
| Autonomous Driving | CARLA (#HW) | Error Rate | 69 | 15 |
| Visual Reinforcement Learning | CARLA (#GP scenario) | Error Rate | 38 | 15 |
Showing 10 of 59 benchmark rows.
