Learning to Act without Actions
About
Pre-training large models on vast amounts of web data has proven to be an effective approach for obtaining powerful, general models in domains such as language and vision. However, this paradigm has not yet taken hold in reinforcement learning. This is because videos, the most abundant form of embodied behavioral data on the web, lack the action labels required by existing methods for imitating behavior from demonstrations. We introduce Latent Action Policies (LAPO), a method for recovering latent action information, and thereby latent-action policies, world models, and inverse dynamics models, purely from videos. LAPO is the first method able to recover the structure of the true action space just from observed dynamics, even in challenging procedurally-generated environments. LAPO enables training latent-action policies that can be rapidly fine-tuned into expert-level policies, either offline using a small action-labeled dataset, or online with rewards. LAPO takes a first step towards pre-training powerful, generalist policies and world models on the vast amounts of videos readily available on the web.
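The core idea can be sketched in a few lines: an inverse dynamics model (IDM) infers a latent action from a pair of consecutive observations, while a forward dynamics model (FDM, acting as a world model) must reconstruct the next observation from the current one plus that latent; both are trained jointly on the shared reconstruction loss, with no action labels. The toy NumPy version below uses linear models on synthetic 2-D trajectories and omits the bottleneck LAPO places on the latent to keep it compact; all names, dimensions, and hyperparameters here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "video" data: 2-D observations where an unobserved discrete
# action shifts the state by a fixed displacement. The action labels (idx)
# are used only to generate data, never for training.
true_actions = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
idx = rng.integers(0, 4, size=512)
obs_t = rng.normal(size=(512, 2))
obs_tp1 = obs_t + true_actions[idx]

latent_dim = 2
# IDM: g(o_t, o_{t+1}) -> latent action z.
W_idm = rng.normal(scale=0.1, size=(4, latent_dim))
# FDM: f(o_t, z) -> predicted o_{t+1}.
W_fdm = rng.normal(scale=0.1, size=(2 + latent_dim, 2))

lr = 0.05
for _ in range(3000):
    x_idm = np.concatenate([obs_t, obs_tp1], axis=1)
    z = x_idm @ W_idm                    # latent action, inferred label-free
    x_fdm = np.concatenate([obs_t, z], axis=1)
    err = x_fdm @ W_fdm - obs_tp1        # next-observation prediction error
    # Gradient of the shared reconstruction loss w.r.t. both models.
    W_fdm -= lr * x_fdm.T @ err / len(err)
    W_idm -= lr * x_idm.T @ (err @ W_fdm[2:].T) / len(err)

# After training, the FDM predicts o_{t+1} well from (o_t, z) alone,
# meaning z has captured the effect of the unobserved action.
z = np.concatenate([obs_t, obs_tp1], axis=1) @ W_idm
pred = np.concatenate([obs_t, z], axis=1) @ W_fdm
mse = float(np.mean((pred - obs_tp1) ** 2))
```

In the full method, the latent-action policy is then trained by behavior cloning against the IDM's latents, and a small labeled dataset (or online reward) suffices to map latents onto the environment's true actions.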
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Interaction-limited policy learning | Procgen (full distribution, easy mode) | Bigfish Score | 36.3 | 8 |
| Continuous Control | DMControl | Point Mass Easy | 885 | 7 |
| Imitation Learning | LIBERO | Spatial MSE | 0.162 | 6 |
| Reinforcement Learning | PROCGEN BIGFISH 1.0 (test) | Accuracy | 80.98 | 6 |
| Reinforcement Learning | PROCGEN CHASER 1.0 (test) | Accuracy | 26.87 | 6 |
| Reinforcement Learning | PROCGEN LEAPER 1.0 (test) | Accuracy | 40.09 | 6 |
| Reinforcement Learning | PROCGEN HEIST 1.0 (test) | Accuracy (%) | 72.23 | 6 |
| Robot Manipulation | Metaworld 50k | Mean Success Rate | 100 | 4 |