
Move to Understand a 3D Scene: Bridging Visual Grounding and Exploration for Efficient and Versatile Embodied Navigation

About

Embodied scene understanding requires not only comprehending visual-spatial information that has been observed but also determining where to explore next in the 3D physical world. Existing 3D Vision-Language (3D-VL) models primarily focus on grounding objects in static observations from 3D reconstruction, such as meshes and point clouds, but lack the ability to actively perceive and explore their environment. To address this limitation, we introduce Move to Understand (MTU3D), a unified framework that integrates active perception with 3D vision-language learning, enabling embodied agents to effectively explore and understand their environment. This is achieved through three key innovations: 1) online query-based representation learning, which builds spatial memory directly from RGB-D frames and eliminates the need for explicit 3D reconstruction; 2) a unified objective for grounding and exploring, which represents unexplored locations as frontier queries and jointly optimizes object grounding and frontier selection; 3) end-to-end trajectory learning that combines Vision-Language-Exploration (VLE) pre-training over a million diverse trajectories collected from both simulated and real-world RGB-D sequences. Extensive evaluations across various embodied navigation and question-answering benchmarks show that MTU3D outperforms state-of-the-art reinforcement learning and modular navigation approaches by 14%, 23%, 9%, and 2% in success rate on HM3D-OVON, GOAT-Bench, SG3D, and A-EQA, respectively. MTU3D's versatility enables navigation using diverse input modalities, including categories, language descriptions, and reference images. These findings highlight the importance of bridging visual grounding and exploration for embodied intelligence.
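To make the second innovation concrete, here is a minimal sketch of what a unified grounding-and-exploration objective can look like: object queries (observed instances) and frontier queries (unexplored boundary regions) are scored against the goal embedding by a single shared head, so "stop and ground" and "move and explore" become one decision. All names, dimensions, the element-wise scoring, and the threshold are illustrative assumptions for this sketch, not MTU3D's actual implementation.

```python
import torch
import torch.nn as nn

class UnifiedGroundingExploration(nn.Module):
    """Sketch of a unified objective: object and frontier queries share
    one goal-relevance head, so grounding and exploration are jointly
    optimized instead of being handled by separate modules."""

    def __init__(self, d_model: int = 256):
        super().__init__()
        self.score_head = nn.Linear(d_model, 1)  # shared relevance head

    def forward(self, object_queries, frontier_queries, goal_embedding):
        # object_queries:   (N_obj, d) from online RGB-D spatial memory
        # frontier_queries: (N_fro, d) encoding unexplored locations
        # goal_embedding:   (d,) category / description / image goal
        queries = torch.cat([object_queries, frontier_queries], dim=0)
        # Element-wise goal conditioning is a stand-in for a real
        # cross-modal matching head.
        scores = self.score_head(queries * goal_embedding).squeeze(-1)
        return scores  # one score per query, objects and frontiers alike

def decide(scores, n_obj: int, ground_threshold: float = 0.5):
    """If the best-scoring query is an observed object above threshold,
    ground it; otherwise navigate toward the best-scoring frontier."""
    best = scores.argmax().item()
    if best < n_obj and torch.sigmoid(scores[best]) > ground_threshold:
        return ("ground", best)
    return ("explore", scores[n_obj:].argmax().item())
```

The design point this sketch illustrates is that representing frontiers as queries lets the same decoder and the same loss rank "an object I have seen" against "a place I have not looked yet", which is what lets the agent trade off grounding against further exploration in a single forward pass.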

Ziyu Zhu, Xilin Wang, Yixuan Li, Zhuofan Zhang, Xiaojian Ma, Yixin Chen, Baoxiong Jia, Wei Liang, Qian Yu, Zhidong Deng, Siyuan Huang, Qing Li • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Object Goal Navigation | HM3D-OVON Seen (val) | SR | 55 | 44 |
| Object Goal Navigation | HM3D-OVON Unseen (val) | SR | 40.8 | 43 |
| Object Goal Navigation | HM3D-OVON Seen-Synonyms (val) | SR | 45 | 35 |
| Open-set Object Goal Navigation | HM3D-OVON Unseen (val) | SR | 40.8 | 28 |
| Multi-Modal Lifelong Navigation | GOAT-Bench Unseen (val) | SR | 47.2 | 22 |
| Open-Vocabulary Object Goal Navigation | HM3D-OVON Seen (val) | SR | 55 | 21 |
| Open-Vocabulary Object Goal Navigation | HM3D-OVON Seen-Synonyms (val) | SR | 45 | 21 |
| Object Navigation | CoIN-Bench Seen Synonyms (val) | SPL | 14.7 | 13 |
| Object Navigation | OVON Unseen (val) | SR | 40.8 | 12 |
| Object Goal Navigation | HM3D-OVON | SR | 40.8 | 11 |

Showing 10 of 19 rows.
