
OmniSpatial: Towards Comprehensive Spatial Reasoning Benchmark for Vision Language Models

About

Spatial reasoning is a key aspect of cognitive psychology and remains a bottleneck for current vision-language models (VLMs). While extensive research has aimed to evaluate or improve VLMs' understanding of basic spatial relations, such as distinguishing left from right, near from far, and object counting, these tasks cover only the most elementary layer of spatial reasoning and are largely approaching saturation in the latest reasoning models. In this work, we introduce OmniSpatial, a comprehensive and challenging benchmark for spatial reasoning, grounded in cognitive psychology. OmniSpatial covers four major categories: dynamic reasoning, complex spatial logic, spatial interaction, and perspective-taking, with 50 fine-grained subcategories. Through careful manual annotation, we construct over 8.4K question-answer pairs. Extensive experiments show that both open- and closed-source VLMs exhibit significant limitations in comprehensive spatial reasoning. We also explore two strategies, PointGraph (explicit scene graph cues) and SpatialCoT (novel-view chain-of-thought), to bolster spatial reasoning.

Mengdi Jia, Zekun Qi, Shaochen Zhang, Wenyao Zhang, Xinqiang Yu, Jiawei He, He Wang, Li Yi • 2025

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Spatial Reasoning | Viewspatial | Accuracy 34.8 | 92 |
| Spatial Reasoning | OmniSpatial (test) | -- | 53 |
| Spatial Reasoning | CV-Bench 2D | Accuracy 68.8 | 22 |
| Block Counting | OrthoMind-3D | Block Count Score 10.6 | 20 |
| Object Reasoning | OrthoMind-3D | Object Count 55 | 20 |
| Spatial Reasoning | OmniSpatial | -- | 15 |
| Visual Question Answering | CLEVR | Accuracy 96.9 | 10 |
| Spatial Reasoning | OrthoMind-3D (OOD) | Block Count Accuracy 21.2 | 10 |
| Spatial Reasoning | SPBench SI | Accuracy 29.7 | 9 |
| Spatial Reasoning | OmniSpatial 8.4K (test) | -- | 8 |
