
Offline Reinforcement Learning from Images with Latent Space Models

About

Offline reinforcement learning (RL) refers to the problem of learning policies from a static dataset of environment interactions. Offline RL enables extensive use and re-use of historical datasets, while also alleviating safety concerns associated with online exploration, thereby expanding the real-world applicability of RL. Most prior work in offline RL has focused on tasks with compact state representations. However, the ability to learn directly from rich observation spaces like images is critical for real-world applications such as robotics. In this work, we build on recent advances in model-based algorithms for offline RL and extend them to high-dimensional visual observation spaces. Model-based offline RL algorithms have achieved state-of-the-art results in state-based tasks and have strong theoretical guarantees. However, they rely crucially on the ability to quantify uncertainty in the model predictions, which is particularly challenging with image observations. To overcome this challenge, we propose to learn a latent-state dynamics model and represent the uncertainty in the latent space. Our approach is both tractable in practice and corresponds to maximizing a lower bound of the ELBO in the unknown POMDP. In experiments on a range of challenging image-based locomotion and manipulation tasks, we find that our algorithm significantly outperforms previous offline model-free RL methods as well as state-of-the-art online visual model-based RL methods. Moreover, our approach also excels on an image-based drawer-closing task on a real robot using a pre-existing dataset. All results, including videos, can be found online at https://sites.google.com/view/lompo/ .
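To make the core idea concrete, here is a minimal, hypothetical sketch of uncertainty-aware model-based offline RL in a latent space: an ensemble of latent dynamics models predicts the next latent state, ensemble disagreement serves as an uncertainty estimate, and the reward is penalized by that disagreement before policy optimization. All names, dimensions, and the linear-model form are illustrative assumptions, not the paper's actual architecture (which uses a learned latent variational model).

```python
import numpy as np

rng = np.random.default_rng(0)

LATENT_DIM, ACTION_DIM, ENSEMBLE = 4, 2, 5

# Hypothetical ensemble of linear latent dynamics models f_i(z, a) -> z'.
# In practice each member would be a learned neural network head.
weights = [
    rng.normal(scale=0.1, size=(LATENT_DIM + ACTION_DIM, LATENT_DIM))
    for _ in range(ENSEMBLE)
]

def predict_next_latents(z, a):
    """Stack each ensemble member's prediction of the next latent state."""
    x = np.concatenate([z, a])
    return np.stack([x @ W for W in weights])  # shape: (ENSEMBLE, LATENT_DIM)

def uncertainty_penalty(z, a):
    """Disagreement across ensemble predictions: mean per-dimension std."""
    preds = predict_next_latents(z, a)
    return preds.std(axis=0).mean()

def penalized_reward(r, z, a, lam=1.0):
    """Pessimistic reward for offline policy optimization:
    subtract the latent-space uncertainty, scaled by lam."""
    return r - lam * uncertainty_penalty(z, a)

z = rng.normal(size=LATENT_DIM)
a = rng.normal(size=ACTION_DIM)
r_pess = penalized_reward(1.0, z, a, lam=1.0)
```

The key design choice this sketch illustrates is that uncertainty is measured in the compact latent space rather than in pixel space, which keeps the disagreement estimate tractable for image observations.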

Rafael Rafailov, Tianhe Yu, Aravind Rajeswaran, Chelsea Finn • 2020

Related benchmarks

Task | Dataset | Result | Rank
Offline Reinforcement Learning | Meta-World medium-replay | BP->DC*: 2.88e+3 | 10
Continuous Control (Offline RL) | DMC Walker Walk → Walker Nofoot v1 (offline) | Mean Reward: 460 | 8
Continuous Control (Offline RL) | DMC Walker Walk → Walker Downhill v1 (offline) | Mean Reward: 462 | 8
Continuous Control (Offline RL) | DMC Walker Walk → Walker Uphill v1 (offline) | Mean Reward: 260 | 8
Continuous Control (Offline RL) | DMC Cheetah Run → Cheetah Downhill v1 (offline) | Mean Reward: 395 | 8
Continuous Control (Offline RL) | DMC Cheetah Run → Cheetah Nopaw v1 (offline) | Mean Reward: 120 | 8
Continuous Control (Offline RL) | DMC Cheetah Run → Cheetah Uphill v1 (offline) | Mean Reward: 46 | 8
Visual Walker Walk | Walker-walk DeepMind Control suite (medium-expert) | Normalized Score: 78.9 | 5
Visual Sawyer Door Opening | Sawyer-door medium-expert | Success Rate: 100 | 5
Visual Sawyer Door Opening | Sawyer-door (expert) | Success Rate: 0.00e+0 | 5

(Showing 10 of 14 rows)
