
Scaling Spatial Intelligence with Multimodal Foundation Models

About

Despite remarkable progress, multimodal foundation models still exhibit surprising deficiencies in spatial intelligence. In this work, we explore scaling up multimodal foundation models to cultivate spatial intelligence within the SenseNova-SI family, built upon established multimodal foundations including visual understanding models (i.e., Qwen3-VL and InternVL3) and unified understanding and generation models (i.e., Bagel). We take a principled approach to constructing high-performing and robust spatial intelligence models by systematically curating SenseNova-SI-8M: eight million diverse data samples under a rigorous taxonomy of spatial capabilities. SenseNova-SI demonstrates unprecedented performance across a broad range of spatial intelligence benchmarks: 68.8% on VSI-Bench, 43.3% on MMSI, 85.7% on MindCube, 54.7% on ViewSpatial, 47.7% on SITE, 63.9% on BLINK, 55.5% on 3DSR, and 72.0% on EmbSpatial, while maintaining strong general multimodal understanding (e.g., 84.9% on MMBench-En). More importantly, we analyze the impact of data scaling, discuss early signs of emergent generalization enabled by diverse data training, analyze the risks of overfitting and language shortcuts, present a preliminary study on spatial chain-of-thought reasoning, and validate a potential downstream application. All newly trained multimodal foundation models are publicly released.

Zhongang Cai, Ruisi Wang, Chenyang Gu, Fanyi Pu, Junxiang Xu, Yubo Wang, Wanqi Yin, Zhitao Yang, Chen Wei, Qingping Sun, Tongxi Zhou, Jiaqi Li, Hui En Pang, Oscar Qian, Yukun Wei, Zhiqian Lin, Xuanke Shi, Kewang Deng, Xiaoyang Han, Zukai Chen, Xiangyu Fan, Hanming Deng, Lewei Lu, Liang Pan, Bo Li, Ziwei Liu, Quan Wang, Dahua Lin, Lei Yang• 2025

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Multimodal Understanding | MMStar | Accuracy 67.8 | 324 |
| Diagram Understanding | AI2D | Accuracy 88.8 | 247 |
| Optical Character Recognition | OCRBench | Score 863 | 232 |
| Spatial Reasoning | VSI-Bench | Avg Score 68.7 | 192 |
| Document Visual Question Answering | DocVQA | Accuracy 95.4 | 132 |
| Spatial Reasoning | ViewSpatial | Accuracy 54.6 | 92 |
| Visual Perception | MMVP | Accuracy 65.3 | 82 |
| Spatial Reasoning | MindCube | Accuracy 85.6 | 69 |
| Multimodal Understanding | MMBench-En | Accuracy 84.9 | 64 |
| Multimodal Video Understanding | VideoMME | Accuracy 76.4 | 50 |

Showing 10 of 18 rows

Other info

GitHub
