
Learning Latent Plans from Play

About

Acquiring a diverse repertoire of general-purpose skills remains an open challenge for robotics. In this work, we propose self-supervising control on top of human teleoperated play data as a way to scale up skill learning. Play has two properties that make it attractive compared to conventional task demonstrations. Play is cheap, as it can be collected in large quantities quickly without task segmenting, labeling, or resetting to an initial state. Play is naturally rich, covering ~4x more interaction space than task demonstrations for the same amount of collection time. To learn control from play, we introduce Play-LMP, a self-supervised method that learns to organize play behaviors in a latent space, then reuse them at test time to achieve specific goals. Combining self-supervised control with a diverse play dataset shifts the focus of skill learning from a narrow and discrete set of tasks to the full continuum of behaviors available in an environment. We find that this combination generalizes well empirically---after self-supervising on unlabeled play, our method substantially outperforms individual expert-trained policies on 18 difficult user-specified visual manipulation tasks in a simulated robotic tabletop environment. We additionally find that play-supervised models, unlike their expert-trained counterparts, are more robust to perturbations and exhibit retrying-till-success behaviors. Finally, we find that our agent organizes its latent plan space around functional tasks, despite never being trained with task labels. Videos, code and data are available at learning-from-play.github.io
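The core self-supervision idea above — learning control from unlabeled play without task segmenting or labeling — rests on hindsight goal relabeling: cut random windows from the play stream and treat each window's final state as the goal that the window's actions achieved. A minimal sketch of that relabeling step (the function name and data layout here are hypothetical, not from the paper's released code):

```python
import random

def sample_play_goals(play_log, window=8, n=4, seed=0):
    """Self-supervised relabeling on an unlabeled play stream:
    sample random windows and label each window's final state
    as the goal. Hypothetical helper for illustration only."""
    rng = random.Random(seed)
    examples = []
    for _ in range(n):
        start = rng.randrange(0, len(play_log) - window)
        states = play_log[start:start + window]
        examples.append({
            "current": states[0],       # conditioning state
            "goal": states[-1],         # final state becomes the goal label
            "trajectory": states,       # supervision for the plan/policy
        })
    return examples

# Toy play stream: each element stands in for one observation.
log = [f"s{i}" for i in range(100)]
batch = sample_play_goals(log)
```

In Play-LMP these relabeled (current state, goal, trajectory) tuples would then train a latent-plan encoder and a goal-conditioned policy; the sketch only covers the cheap, label-free data generation the abstract emphasizes.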

Corey Lynch, Mohi Khansari, Ted Xiao, Vikash Kumar, Jonathan Tompson, Sergey Levine, Pierre Sermanet • 2019

Related benchmarks

Task                            | Dataset                             | Result                       | Rank
Robotic Planning                | OGBench PointMaze Giant 48 (stitch) | Success Rate: 0.00e+0        | 8
Robotic Planning                | OGBench AntMaze Giant 48 (stitch)   | Success Rate: 0.00e+0        | 8
Robotic Planning                | OGBench Scene 48 (play)             | Success Rate: 0.05           | 8
Goal-conditioned Manipulation   | OGBench cube-single-play v0         | Task 1 Success Rate: 0.7     | 7
Multi-task Robotic Manipulation | CALVIN (test)                       | Success Rate (1 task): 61.4  | 4
