
VIEW2SPACE: Studying Multi-View Visual Reasoning from Sparse Observations

About

Multi-view visual reasoning is essential for intelligent systems that must understand complex environments from sparse, discrete viewpoints, yet existing research has largely focused on single-image or temporally dense video settings. In real-world scenarios, reasoning across views requires integrating partial observations without explicit guidance, while collecting large-scale multi-view data with accurate geometric and semantic annotations remains challenging. To address this gap, we leverage physically grounded simulation to construct diverse, high-fidelity 3D scenes with precise per-view metadata, enabling scalable data generation that remains transferable to real-world settings. Building on this simulation engine, we introduce VIEW2SPACE, a multi-dimensional benchmark for sparse multi-view reasoning, together with a scalable, disjoint training split supporting millions of grounded question-answer pairs. Using this benchmark, a comprehensive evaluation of state-of-the-art vision-language and spatial models reveals that multi-view reasoning remains largely unsolved, with most models performing only marginally above random guessing. We further investigate whether training can bridge this gap: our proposed Grounded Chain-of-Thought with Visual Evidence substantially improves performance at moderate difficulty and generalizes to real-world data, outperforming existing approaches in cross-dataset evaluation. Finally, difficulty-aware scaling analyses across model size, data scale, reasoning depth, and visibility constraints indicate that while geometric perception benefits from scaling under sufficient visibility, deep compositional reasoning across sparse views remains a fundamental challenge.
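The paper's data schema is not published here, but to make the setup concrete, the following is a minimal sketch of what a sparse multi-view QA record with per-view geometric metadata might look like. Every name in it (ViewMetadata, MultiViewQA, camera_pose, grounding_boxes, and so on) is a hypothetical illustration under assumed conventions, not the authors' actual format.

```python
# Hypothetical sketch only: field names and conventions are assumptions,
# not the published VIEW2SPACE schema.
from dataclasses import dataclass, field

@dataclass
class ViewMetadata:
    image_path: str            # rendered RGB frame for this viewpoint
    camera_pose: list          # 4x4 camera-to-world matrix, row-major (assumed)
    intrinsics: list           # 3x3 pinhole intrinsics (assumed)
    visible_object_ids: list   # per-view visibility annotation (assumed)

@dataclass
class MultiViewQA:
    scene_id: str
    views: list                # sparse, discrete viewpoints (e.g., a handful of frames)
    question: str              # grounded natural-language question
    choices: list              # options for multiple-choice answering
    answer: str                # gold answer
    # object id -> [x1, y1, x2, y2] boxes for the grounding task (assumed format)
    grounding_boxes: dict = field(default_factory=dict)
```

The key property such a record would capture is that each view carries exact simulator-derived geometry (pose, intrinsics, visibility), which is what makes large-scale grounded QA generation tractable in simulation but hard to annotate in real data.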

Fucai Ke, Zhixi Cai, Boying Li, Long Chen, Beibei Lin, Weiqing Wang, Pari Delir Haghighi, Gholamreza Haffari, Hamid Rezatofighi • 2026

Related benchmarks

Task                           Dataset           Metric             Result   Rank
Multiple Choice Answering      VIEW2SPACE v1     Accuracy           64.93    27
Visual Counting                VIEW2SPACE v1     MAE                0.58     27
Visual Grounding               VIEW2SPACE v1     mIoU               69.34    27
Multi-view spatial reasoning   MindCube (tiny)   Overall Accuracy   70       24
Multiple Choice Answering      VIEW2SPACE        Accuracy (%)       64.93    8
Visual Counting                VIEW2SPACE        Accuracy           54.99    8
Visual Grounding               VIEW2SPACE        mIoU               69.34    8
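For reference, here is a minimal sketch of how the three headline metrics in the table (Accuracy, MAE, mIoU) are conventionally computed. The benchmark's official evaluation script may differ in details such as answer normalization or box formats, so treat this as an illustration of the standard definitions, not the authors' implementation.

```python
# Standard metric definitions; the official VIEW2SPACE evaluator may differ.
import numpy as np

def accuracy(preds, golds):
    """Exact-match accuracy for multiple-choice answering, in percent."""
    return 100.0 * np.mean([p == g for p, g in zip(preds, golds)])

def mae(pred_counts, gold_counts):
    """Mean absolute error for visual counting."""
    return float(np.mean(np.abs(np.array(pred_counts) - np.array(gold_counts))))

def miou(pred_boxes, gold_boxes):
    """Mean intersection-over-union for visual grounding.

    Boxes are [x1, y1, x2, y2]; predictions and golds are paired one-to-one.
    """
    ious = []
    for (px1, py1, px2, py2), (gx1, gy1, gx2, gy2) in zip(pred_boxes, gold_boxes):
        # Intersection rectangle, clamped to zero area when boxes are disjoint.
        ix1, iy1 = max(px1, gx1), max(py1, gy1)
        ix2, iy2 = min(px2, gx2), min(py2, gy2)
        inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
        union = (px2 - px1) * (py2 - py1) + (gx2 - gx1) * (gy2 - gy1) - inter
        ious.append(inter / union if union > 0 else 0.0)
    return float(np.mean(ious))
```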
