
On the Generalization Capacities of MLLMs for Spatial Intelligence

About

Multimodal Large Language Models (MLLMs) that directly process RGB inputs for tasks like 3D localization and navigation have shown remarkable potential. However, we argue that these RGB-only approaches are fundamentally limited in their ability to generalize across cameras. By ignoring camera parameters, they entangle an object's physical properties with the camera's perspective, creating an irresolvable ambiguity. We show this leads MLLMs to overfit to the training camera distribution rather than learning true, generalizable 3D geometric principles. To address this, we propose a Camera-Aware MLLM framework. It learns generalizable spatial reasoning by: (i) injecting camera intrinsics via a dense embedding that conditions each visual token; (ii) introducing a camera-aware data augmentation strategy that synthetically varies camera parameters, forcing the model to disentangle camera properties from scene content; and (iii) distilling geometric priors from a 3D vision foundation model. Extensive experiments demonstrate that camera-aware MLLMs substantially outperform their naive counterparts, particularly in cross-camera generalization tests on spatially grounded tasks, indicating that camera-awareness is not only beneficial but a prerequisite for robust and generalizable spatial intelligence in MLLMs.
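The intrinsics-injection idea in (i) can be illustrated with a minimal sketch. The abstract does not specify the embedding architecture, so everything below is an assumption: intrinsics (fx, fy, cx, cy) are normalized by image size, projected through a linear map (random here, learned in practice), and broadcast-added to every visual token.

```python
import numpy as np

def camera_embedding(fx, fy, cx, cy, w, h, dim=8, rng=None):
    """Hypothetical dense embedding of camera intrinsics.

    Intrinsics are normalized by image width/height so the embedding is
    resolution-invariant, then projected to the visual-token dimension.
    The linear map is random here; in a real model it would be learned.
    """
    feats = np.array([fx / w, fy / h, cx / w, cy / h])
    rng = np.random.default_rng(0) if rng is None else rng
    W = rng.standard_normal((dim, 4)) / np.sqrt(4)
    return W @ feats

def condition_tokens(tokens, cam_emb):
    # Broadcast-add the camera embedding to each visual token,
    # so every token carries the same camera conditioning signal.
    return tokens + cam_emb[None, :]

# 16 visual tokens of dimension 8, conditioned on a 640x480 camera.
tokens = np.zeros((16, 8))
emb = camera_embedding(500.0, 500.0, 320.0, 240.0, 640, 480, dim=8)
out = condition_tokens(tokens, emb)
```

Additive conditioning is only one plausible choice; concatenation or FiLM-style modulation would serve the same purpose of making token features camera-dependent.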

Gongjie Zhang, Wenhao Li, Quanhao Qian, Jiuniu Wang, Deli Zhao, Shijian Lu, Ran Xu • 2026

Related benchmarks

Task                   Dataset                Result            Rank
Spatial Reasoning      VSI-Bench              Avg Score: 46.8   192
Spatial Reasoning      SPAR-Bench (full)      Avg Score: 68.35  23
Spatial Reasoning      SPAR-Bench (tiny)      --                12
Spatial Understanding  CV-Bench 3D (test)     Avg Score: 90.7   11
Spatial Understanding  BLINK Spatial (test)   Avg Score: 77     10
