
SpatialStack: Layered Geometry-Language Fusion for 3D VLM Spatial Reasoning

About

Large vision-language models (VLMs) still struggle with reliable 3D spatial reasoning, a core capability for embodied and physical AI systems. This limitation arises from their inability to capture fine-grained 3D geometry and spatial relationships. While recent efforts have introduced multi-view geometry transformers into VLMs, they typically fuse only the deep-layer features from vision and geometry encoders, discarding rich hierarchical signals and creating a fundamental bottleneck for spatial understanding. To overcome this, we propose SpatialStack, a general hierarchical fusion framework that progressively aligns vision, geometry, and language representations across the model hierarchy. Moving beyond conventional late-stage vision-geometry fusion, SpatialStack stacks and synchronizes multi-level geometric features with the language backbone, enabling the model to capture both local geometric precision and global contextual semantics. Building upon this framework, we develop VLM-SpatialStack, a model that achieves state-of-the-art performance on multiple 3D spatial reasoning benchmarks. Extensive experiments and ablations demonstrate that our multi-level fusion strategy consistently enhances 3D understanding and generalizes robustly across diverse spatial reasoning tasks, establishing SpatialStack as an effective and extensible design paradigm for vision-language-geometry integration in next-generation multimodal physical AI systems.
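The paper does not include implementation details here, but the core contrast between late-stage fusion and SpatialStack's layer-wise fusion can be illustrated with a minimal NumPy sketch. All names, dimensions, and the additive-projection fusion rule below are illustrative assumptions, not the authors' actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (assumed for illustration only).
D_LANG, D_GEO, N_TOKENS, N_LAYERS = 32, 16, 8, 4

# Stand-ins for multi-level geometry-encoder outputs, one feature map per level.
geo_levels = [rng.normal(size=(N_TOKENS, D_GEO)) for _ in range(N_LAYERS)]
# Per-level projections aligning geometry features to the language width.
proj = [rng.normal(size=(D_GEO, D_LANG)) / np.sqrt(D_GEO) for _ in range(N_LAYERS)]

def language_layer(h):
    """Placeholder for one transformer block of the language backbone."""
    return np.tanh(h)

def late_fusion(h, geo_levels, proj):
    """Conventional baseline: inject only the deepest geometry level after
    all language layers, discarding the shallower hierarchical signals."""
    for _ in range(N_LAYERS):
        h = language_layer(h)
    return h + geo_levels[-1] @ proj[-1]

def hierarchical_fusion(h, geo_levels, proj):
    """SpatialStack-style sketch: align level-i geometry features with the
    layer-i language states, so every level of the hierarchy contributes."""
    for i in range(N_LAYERS):
        h = language_layer(h + geo_levels[i] @ proj[i])
    return h

h0 = rng.normal(size=(N_TOKENS, D_LANG))
out_late = late_fusion(h0, geo_levels, proj)
out_hier = hierarchical_fusion(h0, geo_levels, proj)
print(out_late.shape, out_hier.shape)  # both (8, 32)
```

The sketch uses a simple projected-addition for fusion; the actual model may use cross-attention or gating, but the structural point is the same: geometry enters at every layer rather than only after the last one.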

Jiang Zhang, Shijie Zhou, Bangya Liu, Achuta Kadambi, Zhiwen Fan • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Spatial Reasoning | VSI-Bench | Avg Score | 67.5 | 192 |
| Spatial Perception | CV-Bench-3D | Accuracy | 92.2 | 12 |
| Spatial Perception | CV-Bench Average | Accuracy | 85.5 | 12 |
| Spatial Perception | CV-Bench 2D | Accuracy (%) | 78.9 | 12 |
| Spatial Reasoning | CV-Bench | Count | 69 | 3 |
