
RoboStream: Weaving Spatio-Temporal Reasoning with Memory in Vision-Language Models for Robotics

About

Enabling reliable long-horizon robotic manipulation is a crucial step toward open-world embodied intelligence. However, existing VLM-based planners treat each step as an isolated observation-to-action mapping, forcing them to re-infer scene geometry from raw pixels at every decision point while remaining unaware of how prior actions have reshaped the environment. Despite strong short-horizon performance, these systems lack the spatio-temporal reasoning required for persistent geometric anchoring and for remembering action-triggered state transitions. Without persistent state tracking, perceptual errors accumulate across the execution horizon, temporarily occluded objects are catastrophically forgotten, and these compounding failures lead to precondition violations that cascade through subsequent steps. In contrast, humans maintain a persistent mental model that continuously tracks spatial relations and action consequences across interactions rather than reconstructing them at each instant. Inspired by this human capacity for causal spatio-temporal reasoning with persistent memory, we propose RoboStream, a training-free framework that achieves geometric anchoring through Spatio-Temporal Fusion Tokens (STF-Tokens), which bind visual evidence to 3D geometric attributes for persistent object grounding, and maintains causal continuity via a Causal Spatio-Temporal Graph (CSTG) that records action-triggered state transitions across steps. This design enables the planner to trace causal chains and preserve object permanence under occlusion without additional training or fine-tuning. RoboStream achieves 90.5% on long-horizon RLBench and 44.4% on challenging real-world block-building tasks, where both SoFar and VoxPoser score only 11.1%, demonstrating that spatio-temporal reasoning and causal memory are critical missing components for reliable long-horizon manipulation.
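To make the CSTG idea concrete, the sketch below shows a minimal causal spatio-temporal graph in Python: nodes carry an object's last-known 3D pose, edges record which action caused which state transition, and an occluded object keeps its pose rather than being forgotten. All names (`CausalSTGraph`, `apply_action`, `causal_chain`, etc.) are illustrative assumptions, not the paper's actual API or implementation.

```python
from dataclasses import dataclass, field

@dataclass
class ObjectNode:
    """Per-object memory: last-known pose persists across steps."""
    name: str
    position: tuple              # last-known 3D position
    visible: bool = True         # pose is retained even when occluded
    history: list = field(default_factory=list)

class CausalSTGraph:
    """Toy causal spatio-temporal graph: objects as nodes,
    action-triggered state transitions as causal edges."""

    def __init__(self):
        self.nodes = {}
        self.transitions = []    # (step, action, object, old_pos, new_pos)
        self.step = 0

    def observe(self, name, position):
        """Register or refresh an object from a new observation."""
        node = self.nodes.setdefault(name, ObjectNode(name, position))
        node.position = position
        node.visible = True

    def apply_action(self, action, name, new_position):
        """Record an action-triggered state transition for an object."""
        node = self.nodes[name]
        self.transitions.append(
            (self.step, action, name, node.position, new_position))
        node.history.append((self.step, action))
        node.position = new_position
        self.step += 1

    def occlude(self, name):
        """Mark an object occluded; its last pose survives (object permanence)."""
        self.nodes[name].visible = False

    def causal_chain(self, name):
        """Trace the sequence of actions that changed this object's state."""
        return [(s, a) for (s, a, n, _, _) in self.transitions if n == name]

# Usage: a block is stacked, then occluded; its pose and causal history survive.
g = CausalSTGraph()
g.observe("red_block", (0.1, 0.2, 0.0))
g.apply_action("place_on_tower", "red_block", (0.1, 0.2, 0.05))
g.occlude("red_block")
print(g.nodes["red_block"].position)    # last-known pose, not forgotten
print(g.causal_chain("red_block"))
```

The key design point this toy version illustrates is that the planner queries the graph instead of re-inferring geometry from pixels: an occluded object's pose and the chain of actions that produced it remain available at every subsequent step.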

Yuzhi Huang, Jie Wu, Weijue Bu, Ziyi Xiong, Gaoyang Jiang, Ye Li, Kangye Ji, Shuzhao Xie, Yue Huang, Chenglei Wu, Jingyan Jiang, Zhi Wang • 2026

Related benchmarks

Task                                 Dataset                         Result                                    Rank
Move Near                            SIMPLER Google Robot Setup      Success Rate: 95.8                        12
Pick Coke                            SIMPLER Google Robot Setup      Success Rate: 95.7                        12
Robot Manipulation                   SIMPLER WidowX + Bridge Setup   Spoon Success Rate: 62.5                  10
6-DoF Object Rearrangement           Open6DOR V2 (test)              Position Accuracy: 93.8                   8
Spatial Reasoning                    6-DoF SpatialBench              Relative Position Accuracy: 66            8
Long-horizon Manipulation            RLBench                         Bridge Between Towers Success Rate: 88    5
Short-horizon Robotic Manipulation   RLBench                         Close Drawer: 92                          3
