Stochastic Latent Actor-Critic: Deep Reinforcement Learning with a Latent Variable Model
About
Deep reinforcement learning (RL) algorithms can use high-capacity deep networks to learn directly from image observations. However, these high-dimensional observation spaces present a number of challenges in practice, since the policy must now solve two problems: representation learning and task learning. In this work, we tackle these two problems separately, by explicitly learning latent representations that can accelerate reinforcement learning from images. We propose the stochastic latent actor-critic (SLAC) algorithm: a sample-efficient and high-performing RL algorithm for learning policies for complex continuous control tasks directly from high-dimensional image inputs. SLAC provides a novel and principled approach for unifying stochastic sequential models and RL into a single method, by learning a compact latent representation and then performing RL in the model's learned latent space. Our experimental evaluation demonstrates that our method outperforms both model-free and model-based alternatives in terms of final performance and sample efficiency, on a range of difficult image-based control tasks. Our code and videos of our results are available at our website.
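The two-stage structure described above (fit a stochastic sequential latent variable model, then run actor-critic in its learned latent space) can be sketched compactly. The snippet below is a minimal, illustrative PyTorch sketch, not the released implementation: it assumes a single-level Gaussian latent, MLP encoder/decoder, flattened observations, and a posterior that conditions only on the current observation (the paper uses a two-level latent, convolutional networks, and a filtering posterior over the observation-action history). All dimensions and names are placeholders.

```python
# Minimal sketch of SLAC's latent-model stage (illustrative, not the authors' code).
# Reward prediction and the second latent level are omitted for brevity.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.distributions import Normal, kl_divergence

OBS_DIM, ACT_DIM, LATENT_DIM = 64, 6, 32  # placeholder sizes


class LatentModel(nn.Module):
    """Stochastic sequential model: posterior q(z_t | x_t), learned prior
    p(z_{t+1} | z_t, a_t), and observation decoder p(x_t | z_t)."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Linear(OBS_DIM, 2 * LATENT_DIM)              # q(z_t | x_t)
        self.prior = nn.Linear(LATENT_DIM + ACT_DIM, 2 * LATENT_DIM)   # p(z_{t+1} | z_t, a_t)
        self.decoder = nn.Linear(LATENT_DIM, OBS_DIM)                  # p(x_t | z_t)

    def posterior(self, obs):
        mu, log_std = self.encoder(obs).chunk(2, dim=-1)
        return Normal(mu, log_std.exp())

    def prior_dist(self, z, act):
        mu, log_std = self.prior(torch.cat([z, act], dim=-1)).chunk(2, dim=-1)
        return Normal(mu, log_std.exp())


def model_loss(model, obs_seq, act_seq):
    """Negative ELBO over a length-T sequence: reconstruction error plus
    KL(posterior || learned prior), with a standard-normal prior at t=0
    (a simplification of the paper's initial distribution)."""
    T = obs_seq.shape[0]
    q = model.posterior(obs_seq[0])
    z = q.rsample()  # reparameterized sample, so gradients flow through z
    recon = F.mse_loss(model.decoder(z), obs_seq[0])
    kl = kl_divergence(q, Normal(torch.zeros_like(z), torch.ones_like(z))).sum(-1).mean()
    for t in range(1, T):
        prior = model.prior_dist(z, act_seq[t - 1])
        q = model.posterior(obs_seq[t])
        z = q.rsample()
        recon = recon + F.mse_loss(model.decoder(z), obs_seq[t])
        kl = kl + kl_divergence(q, prior).sum(-1).mean()
    return recon + kl
```

In the second stage, soft actor-critic updates proceed in the usual way, except that the critic consumes posterior samples z_t in place of raw images, while the actor conditions on a short window of recent observations and actions.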
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Continuous Control | DMControl 500k | Spin Score: 673 | 33 |
| Continuous Control | DMControl 100k | Finger Spin Score: 693 | 29 |
| Continuous Control | Hopper | Average Reward: 2.22e+4 | 15 |
| Offline Reinforcement Learning | DMControl cheetah-run (expert) | Normalized Score: 8.92 | 12 |
| Offline Reinforcement Learning | DMControl walker-walk (expert) | Normalized Score: 11.71 | 12 |
| Reinforcement Learning | HalfCheetah Random | -- | 10 |
| Continuous Control | OpenAI Gym MuJoCo HalfCheetah POMDP (test) | Average Return: 3.01e+3 | 8 |
| Continuous Control | OpenAI Gym MuJoCo Pendulum POMDP (test) | Average Return: 167.3 | 8 |
| Continuous Control | OpenAI Gym MuJoCo Ant POMDP (test) | Average Return: 1.13e+3 | 8 |
| Continuous Control | OpenAI Gym MuJoCo Hopper POMDP (test) | Average Return: 739.3 | 8 |