
SpatialVLA: Exploring Spatial Representations for Visual-Language-Action Models

About

In this paper, we argue that spatial understanding is key to robot manipulation, and propose SpatialVLA to explore effective spatial representations for the robot foundation model. Specifically, we introduce Ego3D Position Encoding to inject 3D information into the input observations of the visual-language-action model, and propose Adaptive Action Grids to represent spatial robot movement actions with adaptively discretized action grids, facilitating the learning of generalizable and transferable spatial action knowledge for cross-robot control. SpatialVLA is first pre-trained on top of a vision-language model with 1.1 million real-world robot episodes to learn a generalist manipulation policy across multiple robot environments and tasks. After pre-training, SpatialVLA is directly applied to perform numerous tasks in a zero-shot manner. Its superior results in simulation and on real-world robots demonstrate its advantage in inferring complex robot motion trajectories and its strong in-domain multi-task generalization ability. We further show that the proposed Adaptive Action Grids offer a new and effective way to fine-tune the pre-trained SpatialVLA model for new simulation and real-world setups, where the pre-learned action grids are re-discretized to capture the robot-specific spatial action movements of the new setup. The superior results of extensive evaluations demonstrate exceptional in-distribution generalization and out-of-distribution adaptation capability, highlighting the crucial benefit of the proposed spatial-aware representations for generalist robot policy learning. All details and code will be open-sourced.
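The two components lend themselves to short illustrations. Below is a minimal, hypothetical sketch of an Ego3D-style position encoding: per-pixel depth is back-projected through the camera intrinsics to egocentric 3D points, encoded with sinusoidal features, and added to the visual patch tokens. All function and parameter names here are illustrative assumptions, not the paper's actual implementation.

```python
import torch


def backproject(depth, fx, fy, cx, cy):
    """Back-project an (H, W) metric depth map to (H, W, 3) egocentric points."""
    H, W = depth.shape
    v, u = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    x = (u - cx) / fx * depth
    y = (v - cy) / fy * depth
    return torch.stack([x, y, depth], dim=-1)


def sinusoidal_3d(points, num_freqs=8):
    """Encode (..., 3) points with sin/cos features at geometric frequencies."""
    freqs = 2.0 ** torch.arange(num_freqs)    # (F,)
    angles = points.unsqueeze(-1) * freqs     # (..., 3, F)
    return torch.cat([angles.sin(), angles.cos()], dim=-1).flatten(-2)


class Ego3DPositionEncoding(torch.nn.Module):
    """Add a learned projection of 3D position features to visual tokens."""

    def __init__(self, embed_dim, num_freqs=8):
        super().__init__()
        self.num_freqs = num_freqs
        self.proj = torch.nn.Sequential(
            torch.nn.Linear(3 * 2 * num_freqs, embed_dim),
            torch.nn.GELU(),
            torch.nn.Linear(embed_dim, embed_dim),
        )

    def forward(self, patch_tokens, patch_points):
        # patch_tokens: (B, N, D) visual tokens; patch_points: (B, N, 3)
        return patch_tokens + self.proj(sinusoidal_3d(patch_points, self.num_freqs))
```

Adaptive Action Grids can likewise be sketched as distribution-aware discretization. One simple stand-in is quantile binning: place bin edges at quantiles of the training-action distribution so each action token covers roughly equal probability mass, then re-fit (re-discretize) the edges on a new robot's data when fine-tuning. The paper's exact grid-construction procedure may differ; the class below is an assumption-laden illustration.

```python
import numpy as np


class AdaptiveActionGrid:
    """Per-dimension adaptive discretization of continuous robot actions."""

    def __init__(self, num_bins=256):
        self.num_bins = num_bins
        self.edges = None    # (D, num_bins + 1) bin edges per action dim
        self.centers = None  # (D, num_bins) bin centers per action dim

    def fit(self, actions):
        """Fit bin edges to an (N, D) corpus of continuous actions."""
        qs = np.linspace(0.0, 1.0, self.num_bins + 1)
        self.edges = np.quantile(actions, qs, axis=0).T
        self.centers = (self.edges[:, :-1] + self.edges[:, 1:]) / 2

    def encode(self, action):
        """Map a (D,) continuous action to per-dimension token ids."""
        return np.array([
            np.searchsorted(e[1:-1], a) for e, a in zip(self.edges, action)
        ])

    def decode(self, tokens):
        """Map per-dimension token ids back to a continuous action."""
        return np.array([c[t] for c, t in zip(self.centers, tokens)])


# Pre-training: fit grids on the large cross-robot action corpus.
grid = AdaptiveActionGrid(num_bins=256)
grid.fit(np.random.randn(100_000, 7))    # stand-in for real action data

# Fine-tuning: re-discretize on the new robot's (narrower) action statistics.
grid.fit(0.1 * np.random.randn(5_000, 7))
```

Quantile bins (rather than uniform ones) concentrate resolution where actions actually occur, which is one plausible reading of why re-discretizing the grids helps the model adapt to robot-specific action ranges.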

Delin Qu, Haoming Song, Qizhi Chen, Yuanqi Yao, Xinyi Ye, Yan Ding, Zhigang Wang, JiaYuan Gu, Bin Zhao, Dong Wang, Xuelong Li • 2025

Related benchmarks

Task                 | Dataset                                            | Metric                      | Result | Rank
---------------------|----------------------------------------------------|-----------------------------|--------|-----
Robot Manipulation   | LIBERO                                             | Goal Achievement            | 93.7   | 494
Robot Manipulation   | LIBERO (test)                                      | Average Success Rate        | 78.1   | 142
Robot Manipulation   | SimplerEnv WidowX Robot tasks (test)               | Success Rate (Spoon)        | 20.8   | 79
Robot Manipulation   | SimplerEnv Google Robot tasks Visual Matching      | Pick Coke Can Success Rate  | 86     | 62
Robot Manipulation   | SimplerEnv Google Robot tasks Variant Aggregation  | Pick Coke Can Success Rate  | 89.5   | 44
Robotic Manipulation | LIBERO 1.0 (test)                                  | Long                        | 55.5   | 30
Move Near            | SimplerEnv Google Robot embodiment                 | Success Rate                | 77.9   | 28
Pick Can             | SimplerEnv Google Robot embodiment                 | Success Rate                | 88     | 28
Drawer Opening       | SimplerEnv Google Robot embodiment (test)          | Success Rate                | 57.4   | 28
Robot Manipulation   | SimplerEnv Google Robot Visual Matching            | Pick Coke Can               | 86     | 28

Showing 10 of 62 rows.

Other info

Code
