Thinking in Space: How Multimodal Large Language Models See, Remember, and Recall Spaces

About

Humans possess the visual-spatial intelligence to remember spaces from sequential visual observations. However, can Multimodal Large Language Models (MLLMs) trained on million-scale video datasets also "think in space" from videos? We present a novel video-based visual-spatial intelligence benchmark (VSI-Bench) of over 5,000 question-answer pairs, and find that MLLMs exhibit competitive, though subhuman, visual-spatial intelligence. We probe models to express how they think in space both linguistically and visually, and find that while spatial reasoning capabilities remain the primary bottleneck for MLLMs to reach higher benchmark performance, local world models and spatial awareness do emerge within these models. Notably, prevailing linguistic reasoning techniques (e.g., chain-of-thought, self-consistency, tree-of-thoughts) fail to improve performance, whereas explicitly generating cognitive maps during question answering enhances MLLMs' spatial distance ability.
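
The contrast between the two prompting styles is easy to illustrate. Below is a minimal sketch, not the paper's code: it assumes an OpenAI-style chat-completions API and substitutes a hypothetical text description of the video frames (the actual benchmark feeds video to MLLMs), comparing a plain chain-of-thought prompt with one that asks the model to first lay out an explicit cognitive map before answering.

```python
# Minimal sketch of the two prompting styles contrasted in the abstract.
# Assumptions (not from the paper): the OpenAI Python SDK, the "gpt-4o"
# model name as a placeholder MLLM, a textual stand-in for video input,
# and the illustrative question below (not drawn from VSI-Bench).
from openai import OpenAI

client = OpenAI()

def ask(scene_description: str, question: str, use_cognitive_map: bool) -> str:
    if use_cognitive_map:
        # Cognitive-map prompting: make the model externalize a spatial
        # layout first, then reason over it.
        instruction = (
            "First, build a cognitive map: list every object in the scene "
            "with its approximate (x, y) position on a 10x10 grid of the "
            "room. Then use that map to answer the question."
        )
    else:
        # Plain chain-of-thought prompting.
        instruction = "Think step by step, then answer the question."

    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; the paper evaluates many MLLMs
        messages=[
            {"role": "system", "content": instruction},
            {"role": "user", "content": f"{scene_description}\n\nQ: {question}"},
        ],
    )
    return response.choices[0].message.content

# Illustrative VSI-Bench-style relative-distance question.
scene = "Frames show a living room with a sofa, a TV stand, and a floor lamp."
print(ask(scene, "Which object is closest to the sofa?", use_cognitive_map=True))
```

Per the abstract, the cognitive-map variant is the one that helps on spatial-distance questions, while chain-of-thought-style prompting alone does not improve benchmark performance.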

Jihan Yang, Shusheng Yang, Anjali W. Gupta, Rilyn Han, Li Fei-Fei, Saining Xie • 2024

Related benchmarks

Task                        | Dataset                    | Result                      | Rank
Spatial Reasoning           | VSI-Bench                  | Avg Score: 79.2             | 192
Spatial Reasoning           | MindCube                   | Accuracy: 94.5              | 69
3D Question Answering       | VSI-Bench                  | Average Score: 59.72        | 37
Visual Spatial Interaction  | VSI                        | Average Performance: 59.72  | 30
Spatial Question Answering  | OST-Bench (1,396 QA pairs) | Average Score: 64           | 12
Spatial Intelligence        | SpaCE-10                   | Accuracy: 91.2              | 7
Spatiotemporal Intelligence | VSTI-Bench                 | Accuracy: 77                | 6
