
Latent Action Pretraining from Videos

About

We introduce Latent Action Pretraining for general Action models (LAPA), an unsupervised method for pretraining Vision-Language-Action (VLA) models without ground-truth robot action labels. Existing VLA models require action labels, typically collected by human teleoperators, during pretraining, which significantly limits the possible data sources and scale. In this work, we propose a method for learning from internet-scale videos that lack robot action labels. We first train an action quantization model with a VQ-VAE-based objective to learn discrete latent actions between image frames, then pretrain a latent VLA model to predict these latent actions from observations and task descriptions, and finally finetune the VLA on small-scale robot manipulation data to map latent actions to robot actions. Experimental results demonstrate that our method significantly outperforms existing techniques that train robot manipulation policies from large-scale videos. Furthermore, it outperforms the state-of-the-art VLA model trained with robot action labels on real-world manipulation tasks that require language conditioning, generalization to unseen objects, and semantic generalization to unseen instructions. Training only on human manipulation videos also shows positive transfer, opening up the potential for leveraging web-scale data for robotics foundation models.
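The core of the first step is vector quantization: a continuous latent describing the change between two frames is snapped to the nearest entry of a learned codebook, yielding a discrete "latent action" token the VLA can predict. The sketch below illustrates only that quantization step, not the paper's actual model; the encoder, codebook size, and embedding dimension are all placeholder assumptions (a real implementation would use a learned network trained with the VQ-VAE objective).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 8 discrete latent actions, 4-dim embeddings.
CODEBOOK_SIZE, DIM = 8, 4
codebook = rng.normal(size=(CODEBOOK_SIZE, DIM))  # learned in practice

def encode_transition(frame_t, frame_t1):
    # Stand-in encoder: a real model is a learned network over the
    # frame pair; here we just pool the per-pixel frame difference.
    return (frame_t1 - frame_t).mean(axis=0)

def quantize(z):
    # VQ step: snap the continuous latent to its nearest codebook
    # entry (squared Euclidean distance) and return its index.
    idx = int(np.argmin(((codebook - z) ** 2).sum(axis=1)))
    return idx, codebook[idx]

# Toy "frames" as flat feature arrays of shape (pixels, DIM).
frame_t = rng.normal(size=(16, DIM))
frame_t1 = frame_t + 0.1 * rng.normal(size=(16, DIM))

z = encode_transition(frame_t, frame_t1)
token, z_q = quantize(z)
# `token` is the discrete latent action the latent VLA learns to
# predict; the final finetuning stage maps such tokens to real
# robot actions.
```

Because the tokens are discrete, the pretraining stage reduces to next-token prediction over latent actions, which is what lets the method train on videos with no action labels at all.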

Seonghyeon Ye, Joel Jang, Byeongguk Jeon, Sejune Joo, Jianwei Yang, Baolin Peng, Ajay Mandlekar, Reuben Tan, Yu-Wei Chao, Bill Yuchen Lin, Lars Liden, Kimin Lee, Jianfeng Gao, Luke Zettlemoyer, Dieter Fox, Minjoon Seo • 2024

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Robot Manipulation | LIBERO | Goal Achievement: 58.8 | 494 |
| Robot Manipulation | LIBERO (test) | Average Success Rate: 64.3 | 142 |
| Long-horizon robot manipulation | Calvin ABCD→D | Task 1 Completion Rate: 84 | 96 |
| Robot Manipulation | SimplerEnv WidowX Robot tasks (test) | Success Rate (Spoon): 70.8 | 79 |
| Robotic Manipulation | LIBERO v1 (test) | Config 10 Score: 55.4 | 27 |
| General Robot Manipulation | SimplerEnv | Average Success Rate: 57.3 | 23 |
| Block-stacking | Video-CraftBench | Success Rate (Human): 37.4 | 14 |
| Video Generation | Video-CraftBench | SSIM: 62.4 | 14 |
| Sequential Paper Folding | Video-CraftBench | Step 1 Success Rate: 49.5 | 14 |
| Robot Manipulation | Real-world Robot Manipulation Novel Object | Success Rate: 53.3 | 9 |
Showing 10 of 23 rows
