
SmolVLA: A Vision-Language-Action Model for Affordable and Efficient Robotics

About

Vision-language models (VLMs) pretrained on large-scale multimodal datasets encode rich visual and linguistic knowledge, making them a strong foundation for robotics. Rather than training robotic policies from scratch, recent approaches adapt VLMs into vision-language-action (VLA) models that enable natural language-driven perception and control. However, existing VLAs are typically massive, often with billions of parameters, leading to high training costs and limited real-world deployability. Moreover, they rely on academic and industrial datasets, overlooking the growing availability of community-collected data from affordable robotic platforms. In this work, we present SmolVLA, a small, efficient, and community-driven VLA that drastically reduces both training and inference costs while retaining competitive performance. SmolVLA is designed to be trained on a single GPU and deployed on consumer-grade GPUs or even CPUs. To further improve responsiveness, we introduce an asynchronous inference stack that decouples perception and action prediction from action execution, allowing higher control rates with chunked action generation. Despite its compact size, SmolVLA achieves performance comparable to VLAs that are 10x larger. We evaluate SmolVLA on a range of simulated and real-world robotic benchmarks and release all code, pretrained models, and training data.
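The asynchronous inference idea can be illustrated with a small sketch: an executor consumes actions from a queue at the control rate while the (slow) policy forward pass runs concurrently, refilling the queue with the next action chunk before it empties. This is a minimal toy illustration, not SmolVLA's actual stack; the chunk size, refill threshold, latencies, and `predict_chunk` function are all assumed for demonstration.

```python
import asyncio
from collections import deque

CHUNK_SIZE = 4        # actions per inference call (assumed, not SmolVLA's value)
REFILL_THRESHOLD = 2  # request the next chunk before the queue empties (assumed)

async def predict_chunk(step: int) -> list[str]:
    """Stand-in for a slow VLA forward pass that emits an action chunk."""
    await asyncio.sleep(0.05)  # simulated inference latency
    return [f"action_{step}_{i}" for i in range(CHUNK_SIZE)]

async def control_loop(num_steps: int) -> list[str]:
    queue: deque[str] = deque(await predict_chunk(0))  # warm start
    pending: asyncio.Task | None = None
    executed: list[str] = []
    for t in range(1, num_steps + 1):
        # Kick off the next prediction *before* the queue runs dry,
        # so inference overlaps with action execution.
        if len(queue) <= REFILL_THRESHOLD and pending is None:
            pending = asyncio.create_task(predict_chunk(t))
        # Only block on inference if we have truly run out of actions.
        if not queue and pending is not None:
            queue.extend(await pending)
            pending = None
        executed.append(queue.popleft())
        await asyncio.sleep(0.01)  # simulated actuation at the control rate
        # Collect a finished chunk without blocking.
        if pending is not None and pending.done():
            queue.extend(pending.result())
            pending = None
    return executed

actions = asyncio.run(control_loop(10))
print(len(actions))  # 10
```

Because prediction of the next chunk overlaps with execution of the current one, the controller rarely stalls waiting for the model, which is what enables higher effective control rates.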

Mustafa Shukor, Dana Aubakirova, Francesco Capuano, Pepijn Kooijmans, Steven Palma, Adil Zouitine, Michel Aractingi, Caroline Pascal, Martino Russi, Andres Marafioti, Simon Alibert, Matthieu Cord, Thomas Wolf, Remi Cadene • 2025

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Robot Manipulation | LIBERO | Goal Achievement: 91 | 494 |
| Robot Manipulation | LIBERO (test) | Average Success Rate: 88.8 | 142 |
| Robot Policy Learning | LIBERO | S (Spatial) Rate: 93 | 16 |
| Robotic Manipulation | ManiSkill3 | Stack Cube Success Rate: 12.7 | 15 |
| Robotic Manipulation | LIBERO Franka Panda 120 Tasks | Spatial Achievement Rate: 93 | 9 |
| Dynamic Object Manipulation | DOM Simulation 1.0 (test) | Reactivity (CR): 1.85e+3 | 9 |
| Hand-Object Interaction | DynaHOI Online Evaluation | S-Loc (%): 10 | 7 |
| Robot Manipulation | MetaWorld Sawyer 50 tasks | Success Rate (Easy): 87.1 | 6 |
| Grasp-and-place | Grasp-Easy | Average Completion Progress: 82.5 | 5 |
| Language-guided robot manipulation | LIBERO-Goal 5-shot (test) | Success Rate (SR): 0.52 | 5 |

Showing 10 of 16 rows.
