
SpatialBot: Precise Spatial Understanding with Vision Language Models

About

Vision Language Models (VLMs) have achieved impressive performance in 2D image understanding; however, they still struggle with spatial understanding, which is the foundation of Embodied AI. In this paper, we propose SpatialBot, which improves spatial understanding by feeding both RGB and depth images to the model. Additionally, we construct the SpatialQA dataset, which contains multi-level depth-related questions to train VLMs for depth understanding. Finally, we present SpatialBench to comprehensively evaluate VLMs' spatial-understanding capabilities at different levels. Extensive experiments on our spatial-understanding benchmark, on general VLM benchmarks, and on Embodied AI tasks demonstrate the remarkable improvements of SpatialBot trained on SpatialQA. The model, code, and data are available at https://github.com/BAAI-DCAI/SpatialBot.
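The core idea above — pairing each RGB frame with its aligned depth map so the model can ground depth-related questions — can be sketched as follows. This is a minimal, hypothetical illustration; the function names, prompt structure, and the point-wise depth lookup are assumptions for clarity, not the actual SpatialBot API.

```python
# Hypothetical sketch: feed a VLM both an RGB image and its aligned depth
# map, and resolve point-wise depth queries against the depth map.
# All names here are illustrative, not the real SpatialBot interface.

def depth_at(depth_map, x, y):
    """Return the depth value (e.g. in millimetres) at pixel (x, y)."""
    return depth_map[y][x]

def build_prompt(question, rgb_path, depth_path):
    """Compose a two-image query: the RGB frame plus its depth map."""
    return {
        "images": [rgb_path, depth_path],  # RGB and depth fed together
        "text": question,
    }

# Toy 3x3 depth map; real maps come from a sensor or a depth estimator.
depth = [
    [1200, 1150, 1100],
    [1300, 1250, 1180],
    [1400, 1320, 1240],
]

prompt = build_prompt("Which object is closer to the camera?",
                      "scene.png", "scene_depth.png")
print(depth_at(depth, 2, 0))  # depth at pixel (x=2, y=0)
```

Exposing depth as a queryable channel like this is what lets depth-related questions in a SpatialQA-style dataset be answered from measured geometry rather than guessed from RGB cues alone.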

Wenxiao Cai, Iaroslav Ponomarenko, Jianhao Yuan, Xiaoqi Li, Wankou Yang, Hao Dong, Bo Zhao • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Object Hallucination Evaluation | POPE | -- | -- | 1455 |
| Visual Question Answering | VQA v2 | Accuracy | 80.94 | 1362 |
| Visual Question Answering | GQA | -- | -- | 1249 |
| Multimodal Understanding | SEED-Bench Image | -- | -- | 121 |
| Spatial Reasoning | EmbSpatial | Overall Accuracy | 50.66 | 63 |
| Spatial Reasoning | OmniSpatial (test) | Dyn. Score | 40.7 | 53 |
| Egocentric Spatial Reasoning | COCOSPATIAL | Left/Right Accuracy | 84.5 | 19 |
| Perspective-Aware Spatial Reasoning | COMFORT Visual Illusions | Directional Accuracy (Left/Right) | 52.25 | 19 |
| Allocentric Spatial Reasoning | 3DSRBench | Left/Right Accuracy | 39.54 | 19 |
| Allocentric Spatial Reasoning | COMFORT# | Left/Right Accuracy | 46.33 | 19 |

Showing 10 of 28 rows.
