
ST-VLA: Enabling 4D-Aware Spatiotemporal Understanding for General Robot Manipulation

About

Robotic manipulation in open-world environments requires reasoning across semantics, geometry, and long-horizon action dynamics. Existing hierarchical Vision-Language-Action (VLA) frameworks typically use 2D representations to connect high-level reasoning with low-level control, but lack depth awareness and temporal consistency, limiting robustness in complex 3D scenes. We propose ST-VLA, a hierarchical VLA framework using a unified 3D-4D representation to bridge perception and action. ST-VLA converts 2D guidance into 3D trajectories and generates smooth spatial masks that capture 4D spatio-temporal context, providing a stable interface between semantic reasoning and continuous control. To enable effective learning of such representations, we introduce ST-Human, a large-scale human manipulation dataset with 14 tasks and 300k episodes, annotated with 2D, 3D, and 4D supervision via a semi-automated pipeline. Using ST-Human, we train ST-VLM, a spatio-temporal vision-language model that generates spatially grounded and temporally coherent 3D representations to guide policy execution. The smooth spatial masks focus on task-relevant geometry and stabilize latent representations, enabling online replanning and long-horizon reasoning. Experiments on RLBench and real-world manipulation tasks show that ST-VLA significantly outperforms state-of-the-art baselines, improving zero-shot success rates by 44.6% and 30.3%. These results demonstrate that offloading spatio-temporal reasoning to VLMs with unified 3D-4D representations substantially improves robustness and generalization for open-world robotic manipulation. Project website: https://oucx117.github.io/ST-VLA/.
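The abstract's "2D guidance into 3D trajectories" step can be illustrated with standard pinhole-camera unprojection: given a 2D pixel trajectory, per-point depth, and camera intrinsics, each waypoint lifts to a 3D point in the camera frame. This is a minimal sketch of that generic operation, not the paper's implementation; the function name and array shapes are illustrative assumptions.

```python
import numpy as np

def lift_trajectory_to_3d(pixels, depths, K):
    """Unproject a 2D pixel trajectory into 3D camera coordinates.

    pixels: (N, 2) array of (u, v) pixel coordinates along the trajectory
    depths: (N,) array of metric depth values sampled at those pixels
    K:      (3, 3) pinhole camera intrinsics matrix
    Returns an (N, 3) array of 3D points in the camera frame.
    """
    fx, fy = K[0, 0], K[1, 1]   # focal lengths in pixels
    cx, cy = K[0, 2], K[1, 2]   # principal point
    u, v = pixels[:, 0], pixels[:, 1]
    # Invert the pinhole projection: x = (u - cx) * z / fx, y = (v - cy) * z / fy
    x = (u - cx) * depths / fx
    y = (v - cy) * depths / fy
    return np.stack([x, y, depths], axis=-1)

# Example: two waypoints at 2 m depth with made-up intrinsics
K = np.array([[100.0, 0.0, 64.0],
              [0.0, 100.0, 64.0],
              [0.0, 0.0, 1.0]])
pts = lift_trajectory_to_3d(np.array([[64.0, 64.0], [164.0, 64.0]]),
                            np.array([2.0, 2.0]), K)
# The point at the principal point maps to (0, 0, 2) in the camera frame.
```

Real systems would additionally transform these camera-frame points into the robot base frame via the camera extrinsics, and ST-VLA's 4D masks add a temporal dimension on top of such per-frame geometry.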

You Wu, Zixuan Chen, Cunxu Ou, Wenxuan Wang, Wenbo Huang, Lin Cao, Yangtao Chen, Weichao Qiu, Xingyue Quan, Jieqi Shi, Jing Huo, Yang Gao • 2026

Related benchmarks

Task                      Dataset             Result            Rank
Spatial Reasoning         CVBench             --                15
Visual Trace Generation   VABench-V           RMSE 70.65        13
2D Grounding              RoboRefIt           Box-Hit 88.15     7
2D Task                   RoboRefIt           Accuracy 88.15    7
2D Task                   ST-Human-Pointing   Accuracy 96.5     7
3D Task                   CVBench             Accuracy 84.52    7
3D Task                   SAT                 Accuracy 75.33    7
3D Task                   ST-Human-Spatial    Accuracy 98       7
3D Task                   ST-Human-Depth      Accuracy 46.67    7
4D Task                   ST-Human-Planning   Accuracy 92       7

Showing 10 of 28 rows.
