Loc3R-VLM: Language-based Localization and 3D Reasoning with Vision-Language Models

About

Multimodal Large Language Models (MLLMs) have made impressive progress in connecting vision and language, but they still struggle with spatial understanding and viewpoint-aware reasoning. Recent efforts focus on augmenting input representations with geometric cues rather than explicitly teaching models to reason in 3D space. We introduce Loc3R-VLM, a framework that equips 2D Vision-Language Models with advanced 3D understanding capabilities from monocular video input. Inspired by human spatial cognition, Loc3R-VLM relies on two joint objectives: global layout reconstruction to build a holistic representation of the scene structure, and explicit situation modeling to anchor the egocentric perspective. These objectives provide direct spatial supervision that grounds both perception and language in a 3D context. To ensure geometric consistency and metric-scale alignment, we leverage lightweight camera pose priors extracted from a pre-trained 3D foundation model. Loc3R-VLM achieves state-of-the-art performance in language-based localization and outperforms existing 2D- and video-based approaches on situated and general 3D question-answering benchmarks, demonstrating that our spatial supervision framework enables strong 3D understanding. Project page: https://kevinqu7.github.io/loc3r-vlm

Kevin Qu, Haozhe Qi, Mihai Dusmanu, Mahdi Rad, Rui Wang, Marc Pollefeys • 2026
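
The abstract only gives the shape of the training signal: a language loss combined with two spatial objectives (global layout reconstruction and explicit situation modeling). As a rough illustration, the PyTorch sketch below shows how such a joint objective could be assembled. Every name, tensor shape, loss form, and weight here (joint_loss, pred_layout, w_situation, and so on) is an assumption made for illustration, not the paper's implementation.

```python
# Minimal sketch of a joint training objective in the spirit of
# Loc3R-VLM's two spatial objectives plus the usual language loss.
# All names, shapes, and loss choices are illustrative assumptions;
# the paper's actual formulation may differ.
import torch
import torch.nn.functional as F


def joint_loss(lang_logits, lang_targets,
               pred_layout, gt_layout,
               pred_pose, gt_pose,
               w_layout=1.0, w_situation=1.0):
    # Next-token prediction loss for the VLM's language head.
    l_lang = F.cross_entropy(
        lang_logits.reshape(-1, lang_logits.size(-1)),
        lang_targets.reshape(-1),
    )
    # Global layout reconstruction: regress a holistic scene
    # representation (here, a dense point map) against ground truth.
    l_layout = F.l1_loss(pred_layout, gt_layout)
    # Explicit situation modeling: regress the egocentric camera
    # pose (position plus orientation) that anchors the viewpoint.
    l_situation = F.mse_loss(pred_pose, gt_pose)
    return l_lang + w_layout * l_layout + w_situation * l_situation


# Toy shapes: batch of 2, sequence of 8 tokens, vocab of 100,
# a 256-point layout, and a 7-D pose (translation + quaternion).
loss = joint_loss(
    lang_logits=torch.randn(2, 8, 100),
    lang_targets=torch.randint(0, 100, (2, 8)),
    pred_layout=torch.randn(2, 256, 3), gt_layout=torch.randn(2, 256, 3),
    pred_pose=torch.randn(2, 7), gt_pose=torch.randn(2, 7),
)
```

Under this reading, the camera pose priors from the pre-trained 3D foundation model would supply the ground-truth targets (gt_layout, gt_pose) at metric scale, though the source does not spell out how they enter the objective.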

Related benchmarks

Task                         Dataset                  Metric           Result  Rank
3D Question Answering        ScanQA (val)             METEOR           19.5    217
3D Question Answering        SQA3D (test)             EM@1             62.8    98
3D Question Answering        VSI-Bench                Average Score    63.2    37
3D Question Answering        MSQA                     Count Accuracy   33.1    25
Language-based Localization  SQA3D (test)             Accuracy @ 0.5m  42.6    8
3D Question Answering        Beacon3D ScanNet (test)  Class Accuracy   44.8    7
