ST-VLA: Enabling 4D-Aware Spatiotemporal Understanding for General Robot Manipulation
About
Robotic manipulation in open-world environments requires reasoning across semantics, geometry, and long-horizon action dynamics. Existing hierarchical Vision-Language-Action (VLA) frameworks typically use 2D representations to connect high-level reasoning with low-level control, but they lack depth awareness and temporal consistency, limiting robustness in complex 3D scenes. We propose ST-VLA, a hierarchical VLA framework that uses a unified 3D-4D representation to bridge perception and action. ST-VLA converts 2D guidance into 3D trajectories and generates smooth spatial masks that capture 4D spatiotemporal context, providing a stable interface between semantic reasoning and continuous control. To enable effective learning of such representations, we introduce ST-Human, a large-scale human manipulation dataset with 14 tasks and 300k episodes, annotated with 2D, 3D, and 4D supervision via a semi-automated pipeline. Using ST-Human, we train ST-VLM, a spatiotemporal vision-language model that generates spatially grounded and temporally coherent 3D representations to guide policy execution. The smooth spatial masks focus on task-relevant geometry and stabilize latent representations, enabling online replanning and long-horizon reasoning. Experiments on RLBench and real-world manipulation tasks show that ST-VLA significantly outperforms state-of-the-art baselines, improving zero-shot success rates by 44.6% and 30.3%, respectively. These results demonstrate that offloading spatiotemporal reasoning to VLMs with unified 3D-4D representations substantially improves robustness and generalization for open-world robotic manipulation. Project website: https://oucx117.github.io/ST-VLA/.
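The two geometric operations named above, lifting 2D guidance into 3D trajectories and building smooth spatial masks, can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a standard pinhole camera model with known intrinsics `K` and a per-pixel depth map, and the Gaussian-bump mask is one plausible way to realize a "smooth spatial mask"; the function names and `sigma` parameter are hypothetical.

```python
import numpy as np

def backproject_trajectory(uv, depth, K):
    """Lift a 2D pixel trajectory to 3D points in the camera frame
    using a pinhole model: x = (u - cx) * z / fx, y = (v - cy) * z / fy.

    uv:    (N, 2) array of pixel coordinates (u, v) along the trajectory.
    depth: (H, W) depth map in meters.
    K:     (3, 3) camera intrinsic matrix.
    """
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    u, v = uv[:, 0], uv[:, 1]
    z = depth[v.astype(int), u.astype(int)]  # sample depth at each waypoint
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=1)  # (N, 3) 3D trajectory

def smooth_spatial_mask(shape, uv, sigma=15.0):
    """Soft 2D mask over the image: a Gaussian bump centred on each
    trajectory waypoint, merged with max so the mask varies smoothly
    instead of being a hard 0/1 segmentation."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    mask = np.zeros(shape, dtype=np.float32)
    for u, v in uv:
        bump = np.exp(-((xs - u) ** 2 + (ys - v) ** 2) / (2.0 * sigma ** 2))
        mask = np.maximum(mask, bump)
    return mask  # values in [0, 1], peaking at 1 on the trajectory
```

A trajectory predicted in image space can thus be turned into both a 3D path for the controller and a soft attention region over task-relevant geometry.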
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Spatial Reasoning | CVBench | -- | -- | 15 |
| Visual Trace Generation | VABench-V | RMSE | 70.65 | 13 |
| 2D Grounding | RoboRefIt | Box-Hit | 88.15 | 7 |
| 2D Task | RoboRefIt | Accuracy | 88.15 | 7 |
| 2D Task | ST-Human-Pointing | Accuracy | 96.5 | 7 |
| 3D Task | CVBench | Accuracy | 84.52 | 7 |
| 3D Task | SAT | Accuracy | 75.33 | 7 |
| 3D Task | ST-Human-Spatial | Accuracy | 98 | 7 |
| 3D Task | ST-Human-Depth | Accuracy | 46.67 | 7 |
| 4D Task | ST-Human-Planning | Accuracy | 92 | 7 |
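For the visual trace generation entry above, RMSE is reported in pixels between predicted and ground-truth 2D traces. The exact VABench-V evaluation protocol is not specified here; a common convention, shown as an assumption, is the root of the mean squared Euclidean distance over corresponding waypoints:

```python
import numpy as np

def trace_rmse(pred, gt):
    """RMSE between two equal-length 2D traces, in pixels:
    sqrt(mean over waypoints of squared Euclidean distance)."""
    pred = np.asarray(pred, dtype=float)
    gt = np.asarray(gt, dtype=float)
    sq_dist = np.sum((pred - gt) ** 2, axis=-1)  # per-waypoint squared distance
    return float(np.sqrt(np.mean(sq_dist)))
```

Lower is better, so the 70.65 RMSE on VABench-V corresponds to an average waypoint error on that order in pixel units.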