SimVLA: A Simple VLA Baseline for Robotic Manipulation

About

Vision-Language-Action (VLA) models have emerged as a promising paradigm for general-purpose robotic manipulation, leveraging large-scale pre-training to achieve strong performance. The field has rapidly evolved with additional spatial priors and diverse architectural innovations. However, these advancements are often accompanied by varying training recipes and implementation details, which makes it challenging to disentangle the precise source of empirical gains. In this work, we introduce SimVLA, a streamlined baseline designed to establish a transparent reference point for VLA research. By strictly decoupling perception from control, using a standard vision-language backbone and a lightweight action head, and standardizing critical training dynamics, we demonstrate that a minimal design can achieve state-of-the-art performance. Despite having only 0.5B parameters, SimVLA outperforms multi-billion-parameter models on standard simulation benchmarks without robot pretraining. SimVLA also achieves real-robot performance on par with pi0.5. Our results establish SimVLA as a robust, reproducible baseline that enables clear attribution of empirical gains to future architectural innovations. Website: https://frontierrobo.github.io/SimVLA

Yuankai Luo, Woping Chen, Tong Liang, Baiqiao Wang, Zhenguo Li • 2026
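To make the "perception decoupled from control" design concrete, below is a minimal sketch of such an architecture. This is not the authors' released code: the class name, hidden dimension, action dimension, chunk length, and the plain MLP action head are all illustrative assumptions; only the high-level split (a standard vision-language backbone feeding a lightweight action head) comes from the abstract.

```python
# A minimal sketch, assuming a PyTorch-style off-the-shelf VLM backbone.
# All names and sizes below are hypothetical, not SimVLA's actual design.
import torch
import torch.nn as nn

class SimVLASketch(nn.Module):
    """Standard vision-language backbone + lightweight action head,
    with perception strictly decoupled from control."""

    def __init__(self, vlm_backbone: nn.Module, hidden_dim: int = 896,
                 action_dim: int = 7, chunk_len: int = 8):
        super().__init__()
        # Perception: an off-the-shelf VLM (e.g., ~0.5B parameters);
        # it only produces fused vision-language features.
        self.backbone = vlm_backbone
        # Control: a lightweight head mapping pooled features to a
        # chunk of low-level actions (hypothetical 2-layer MLP).
        self.action_head = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim),
            nn.GELU(),
            nn.Linear(hidden_dim, chunk_len * action_dim),
        )
        self.chunk_len, self.action_dim = chunk_len, action_dim

    def forward(self, images, instruction_tokens):
        # Backbone fuses images + language into token features.
        feats = self.backbone(images, instruction_tokens)  # (B, T, hidden_dim)
        pooled = feats.mean(dim=1)                         # (B, hidden_dim)
        # The head alone turns perception features into actions.
        actions = self.action_head(pooled)
        return actions.view(-1, self.chunk_len, self.action_dim)
```

One appeal of this decoupled layout is attribution: because the backbone and the head touch disjoint responsibilities, a gain can be traced to whichever component was changed.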

Related benchmarks

Task                  Dataset                           Result                    Rank
Robot Manipulation    LIBERO                            Goal Achievement: 98.6    494
Robotic Manipulation  WidowX                            Spoon Success Rate: 100   17
Robotic Manipulation  Google Robot Variant Aggregation  Pick Success Rate: 87.4   15
Robotic Manipulation  LIBERO-PRO Spatial                Success Rate (Ori): 99    3
Robotic Manipulation  LIBERO-PRO Object                 Success Rate (Ori): 100   3
Robotic Manipulation  LIBERO-PRO Goal                   Success Rate (Ori): 99    3
Robotic Manipulation  LIBERO-PRO Long                   Success Rate (Ori): 96    3

Other info

GitHub
