
3DRS: MLLMs Need 3D-Aware Representation Supervision for Scene Understanding

About

Recent advances in scene understanding have leveraged multimodal large language models (MLLMs) for 3D reasoning by capitalizing on their strong 2D pretraining. However, the lack of explicit 3D data during MLLM pretraining limits 3D representation capability. In this paper, we investigate the 3D-awareness of MLLMs by evaluating multi-view correspondence and reveal a strong positive correlation between the quality of 3D-aware representation and downstream task performance. Motivated by this, we propose 3DRS, a framework that enhances MLLM 3D representation learning by introducing supervision from pretrained 3D foundation models. Our approach aligns MLLM visual features with rich 3D knowledge distilled from 3D models, effectively improving scene understanding. Extensive experiments across multiple benchmarks and MLLMs -- including visual grounding, captioning, and question answering -- demonstrate consistent performance gains. Project page: https://visual-ai.github.io/3drs

Xiaohu Huang, Jingjing Wu, Qunyi Xie, Kai Han • 2025
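The abstract describes aligning MLLM visual features with features distilled from a pretrained 3D foundation model. A minimal sketch of such a feature-alignment objective is shown below, assuming a learned linear projection into the 3D model's feature space and a cosine-similarity loss; all names and the exact loss form are illustrative, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F

def alignment_loss(mllm_feats: torch.Tensor,
                   feats_3d: torch.Tensor,
                   proj: torch.nn.Module) -> torch.Tensor:
    """Align MLLM visual tokens with 3D foundation-model features.

    mllm_feats: (N, D_mllm) visual token features from the MLLM encoder.
    feats_3d:   (N, D_3d) target features from a pretrained 3D model.
    proj:       learned projection mapping D_mllm -> D_3d (hypothetical).
    Returns 1 - mean cosine similarity (0 when features align perfectly).
    """
    pred = F.normalize(proj(mllm_feats), dim=-1)   # project, then L2-normalize
    target = F.normalize(feats_3d, dim=-1)         # normalize the 3D targets
    return 1.0 - (pred * target).sum(dim=-1).mean()
```

In practice this auxiliary loss would be added to the MLLM's standard language-modeling objective during fine-tuning, with the 3D foundation model frozen as a teacher.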

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| 3D Question Answering | ScanQA (val) | METEOR | 20.5 | 217 |
| 3D Visual Grounding | ScanRefer (val) | Overall Accuracy @ IoU 0.50 | 50.83 | 192 |
| Spatial Reasoning | VSI-Bench | Avg Score | 45.9 | 192 |
| 3D Visual Grounding | ScanRefer | Acc@0.5 | 56.1 | 142 |
| 3D Question Answering | SQA3D (test) | EM@1 | 60.6 | 98 |
| 3D Dense Captioning | Scan2Cap | CIDEr@0.5 | 86.1 | 96 |
| 3D Question Answering | SQA3D | EM | 60.6 | 69 |
| Visual Spatial Intelligence | VSI-Bench | Average Score | 45.9 | 48 |
| 3D Visual Grounding | ScanRefer Overall | Acc@0.25 | 62.9 | 41 |
| 3D Visual Grounding | ScanRefer Unique | Acc@0.25 | 87.4 | 41 |
(10 of 16 benchmark rows shown)
