
OpenVLA: An Open-Source Vision-Language-Action Model

About

Large policies pretrained on a combination of Internet-scale vision-language data and diverse robot demonstrations have the potential to change how we teach robots new skills: rather than training new behaviors from scratch, we can fine-tune such vision-language-action (VLA) models to obtain robust, generalizable policies for visuomotor control. Yet, widespread adoption of VLAs for robotics has been challenging as 1) existing VLAs are largely closed and inaccessible to the public, and 2) prior work fails to explore methods for efficiently fine-tuning VLAs for new tasks, a key component for adoption. Addressing these challenges, we introduce OpenVLA, a 7B-parameter open-source VLA trained on a diverse collection of 970k real-world robot demonstrations. OpenVLA builds on a Llama 2 language model combined with a visual encoder that fuses pretrained features from DINOv2 and SigLIP. As a product of the added data diversity and new model components, OpenVLA demonstrates strong results for generalist manipulation, outperforming closed models such as RT-2-X (55B) by 16.5% in absolute task success rate across 29 tasks and multiple robot embodiments, with 7x fewer parameters. We further show that we can effectively fine-tune OpenVLA for new settings, with especially strong generalization results in multi-task environments involving multiple objects and strong language grounding abilities, and outperform expressive from-scratch imitation learning methods such as Diffusion Policy by 20.4%. We also explore compute efficiency; as a separate contribution, we show that OpenVLA can be fine-tuned on consumer GPUs via modern low-rank adaptation methods and served efficiently via quantization without a hit to downstream success rate. Finally, we release model checkpoints, fine-tuning notebooks, and our PyTorch codebase with built-in support for training VLAs at scale on Open X-Embodiment datasets.
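The abstract above highlights two efficiency claims: OpenVLA can be fine-tuned on consumer GPUs with modern low-rank adaptation (LoRA) methods and served with quantization without hurting downstream success rate. The sketch below shows what such a setup can look like with standard Hugging Face Transformers, PEFT, and bitsandbytes APIs. It is illustrative only: the hub id "openvla/openvla-7b", the AutoModelForVision2Seq entry point, and the LoRA hyperparameters are assumptions, not the authors' exact configuration (their released codebase and fine-tuning notebooks, linked under Code below, define the official recipe).

# Minimal sketch (assumptions noted above): 4-bit quantized loading + LoRA adapters.
import torch
from transformers import AutoModelForVision2Seq, AutoProcessor, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

MODEL_ID = "openvla/openvla-7b"  # assumed Hugging Face Hub checkpoint id

# 4-bit NF4 quantization keeps the 7B backbone within consumer-GPU memory budgets.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

processor = AutoProcessor.from_pretrained(MODEL_ID, trust_remote_code=True)
model = AutoModelForVision2Seq.from_pretrained(
    MODEL_ID,
    quantization_config=bnb_config,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
)

# LoRA trains only small low-rank adapter matrices on top of the frozen,
# quantized weights, so fine-tuning fits on a single consumer GPU.
lora_config = LoraConfig(
    r=32,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules="all-linear",  # attach adapters to every linear layer
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of total parameters

From here, the adapted model can be trained with an ordinary PyTorch loop or the Transformers Trainer on robot demonstration data formatted as image-instruction-action examples.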

Moo Jin Kim, Karl Pertsch, Siddharth Karamcheti, Ted Xiao, Ashwin Balakrishna, Suraj Nair, Rafael Rafailov, Ethan Foster, Grace Lam, Pannag Sanketi, Quan Vuong, Thomas Kollar, Benjamin Burchfiel, Russ Tedrake, Dorsa Sadigh, Sergey Levine, Percy Liang, Chelsea Finn • 2024

Related benchmarks

Task | Dataset | Result | Rank
Robot Manipulation | LIBERO | Goal Achievement: 97.9 | 494
Robot Manipulation | LIBERO (test) | Average Success Rate: 76.5 | 142
Long-horizon robot manipulation | Calvin ABCD→D | Task 1 Completion Rate: 91.3 | 96
Robot Manipulation | SimplerEnv WidowX Robot tasks (test) | Success Rate (Spoon): 4.2 | 79
Long-horizon task completion | Calvin ABC→D | Success Rate (1): 91.3 | 67
Robot Manipulation | SimplerEnv Google Robot tasks Visual Matching | Pick Coke Can Success Rate: 18 | 62
Robot Manipulation | SimplerEnv Google Robot tasks Variant Aggregation | Pick Coke Can Success Rate: 60.8 | 44
Robotic Manipulation | LIBERO-Plus | Camera Robustness Score: 80 | 34
Robotic Manipulation | LIBERO 1.0 (test) | Long: 53.7 | 30
Instruction-following robotic manipulation | CALVIN ABC→D (unseen environment D) | Success Rate (Length 1): 91.3 | 29
Showing 10 of 195 rows.

Other info

Code
