
Stochastic Latent Actor-Critic: Deep Reinforcement Learning with a Latent Variable Model

About

Deep reinforcement learning (RL) algorithms can use high-capacity deep networks to learn directly from image observations. However, these high-dimensional observation spaces present a number of challenges in practice, since the policy must now solve two problems: representation learning and task learning. In this work, we tackle these two problems separately, by explicitly learning latent representations that can accelerate reinforcement learning from images. We propose the stochastic latent actor-critic (SLAC) algorithm: a sample-efficient and high-performing RL algorithm for learning policies for complex continuous control tasks directly from high-dimensional image inputs. SLAC provides a novel and principled approach for unifying stochastic sequential models and RL into a single method, by learning a compact latent representation and then performing RL in the model's learned latent space. Our experimental evaluation demonstrates that our method outperforms both model-free and model-based alternatives in terms of final performance and sample efficiency, on a range of difficult image-based control tasks. Our code and videos of our results are available at our website.
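The abstract's core idea is a two-stage decomposition: first learn a compact latent representation of image observations with a stochastic sequential latent-variable model, then run actor-critic RL in that learned latent space. The sketch below illustrates only this decomposition with stand-in linear maps; all dimensions, weight shapes, and function names are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions, chosen for illustration only.
OBS_DIM, LATENT_DIM, ACTION_DIM = 64, 8, 2

# Stage 1 -- representation learning: a linear "encoder" stands in for
# the stochastic sequential latent-variable model that SLAC trains with
# a variational (reconstruction + KL) objective.
W_enc = rng.normal(size=(LATENT_DIM, OBS_DIM)) * 0.1

def encode(obs):
    """Map a high-dimensional observation to a compact latent state."""
    return W_enc @ obs

# Stage 2 -- task learning: the actor-critic consumes the latent state,
# not the raw image, which is the key structural point of the method.
W_actor = rng.normal(size=(ACTION_DIM, LATENT_DIM)) * 0.1

def policy(latent):
    """Stochastic policy acting on the learned latent space."""
    mean = W_actor @ latent
    return mean + 0.1 * rng.normal(size=ACTION_DIM)

obs = rng.normal(size=OBS_DIM)   # stands in for a flattened image
z = encode(obs)                  # compact latent representation
action = policy(z)               # RL operates entirely in latent space
```

Note that this captures only the interface (images in, latents out, actions from latents); the actual SLAC model is a deep stochastic sequential model trained jointly with a soft actor-critic objective.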

Alex X. Lee, Anusha Nagabandi, Pieter Abbeel, Sergey Levine • 2019

Related benchmarks

Task | Dataset | Result | Rank
Continuous Control | DMControl 500k | Spin Score: 673 | 33
Continuous Control | DMControl 100k | Finger Spin Score: 693 | 29
Continuous Control | Hopper | Average Reward: 2.22e+4 | 15
Offline Reinforcement Learning | DMControl cheetah-run (expert) | Normalized Score: 8.92 | 12
Offline Reinforcement Learning | DMControl walker-walk (expert) | Normalized Score: 11.71 | 12
Reinforcement Learning | HalfCheetah Random | -- | 10
Continuous Control | OpenAI Gym MuJoCo HalfCheetah POMDP (test) | Average Return: 3.01e+3 | 8
Continuous Control | OpenAI Gym MuJoCo Pendulum POMDP (test) | Average Return: 167.3 | 8
Continuous Control | OpenAI Gym MuJoCo Ant POMDP (test) | Average Return: 1.13e+3 | 8
Continuous Control | OpenAI Gym MuJoCo Hopper POMDP (test) | Average Return: 739.3 | 8

Showing 10 of 65 rows
