
D3D-VLP: Dynamic 3D Vision-Language-Planning Model for Embodied Grounding and Navigation

About

Embodied agents face a critical dilemma: end-to-end models lack interpretability and explicit 3D reasoning, while modular systems ignore cross-component interdependencies and synergies. To bridge this gap, we propose the Dynamic 3D Vision-Language-Planning Model (D3D-VLP). Our model introduces two key innovations: 1) a Dynamic 3D Chain-of-Thought (3D CoT) that unifies planning, grounding, navigation, and question answering within a single 3D-VLM and CoT pipeline; 2) a Synergistic Learning from Fragmented Supervision (SLFS) strategy, which uses a masked autoregressive loss to learn from massive, partially-annotated hybrid data, allowing the different CoT components to mutually reinforce and implicitly supervise one another. To support this, we construct a large-scale dataset of 10M hybrid samples from 5K real scans and 20K synthetic scenes, compatible with online learning methods such as RL and DAgger. D3D-VLP achieves state-of-the-art results on multiple benchmarks, including Vision-and-Language Navigation (R2R-CE, REVERIE-CE, NavRAG-CE), Object-goal Navigation (HM3D-OVON), and Task-oriented Sequential Grounding and Navigation (SG3D). Real-world mobile manipulation experiments further validate its effectiveness.
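
The abstract describes SLFS as a masked autoregressive loss over partially annotated chain-of-thought data. As a rough illustration only, the PyTorch sketch below shows one way such a loss could be set up; the function name, tensor shapes, and the `annotated_mask` convention are assumptions for this example, not the authors' implementation.

import torch
import torch.nn.functional as F

def masked_autoregressive_loss(logits, target_ids, annotated_mask, ignore_index=-100):
    """Next-token cross-entropy restricted to annotated CoT segments.

    logits:         (B, T, V) token predictions from the 3D-VLM.
    target_ids:     (B, T)    ground-truth ids for the full CoT sequence
                              (planning / grounding / navigation / QA segments).
    annotated_mask: (B, T)    1 where a segment is annotated for this sample,
                              0 where the annotation is missing.
    """
    # Standard autoregressive shift: predict token t+1 from tokens <= t.
    logits = logits[:, :-1, :]
    targets = target_ids[:, 1:].clone()
    mask = annotated_mask[:, 1:].bool()

    # Unannotated positions are excluded from the loss; the model still
    # generates those segments, so annotated downstream segments can
    # implicitly supervise them through the autoregressive context.
    targets[~mask] = ignore_index
    return F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),
        targets.reshape(-1),
        ignore_index=ignore_index,
    )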

Zihan Wang, Seungjun Lee, Guangzhao Dai, Gim Hee Lee • 2025

Related benchmarks

Task                             Dataset                Metric                  Result   Rank
Vision-and-Language Navigation   REVERIE (val unseen)   SPL                     34.7     129
Embodied Navigation              R2R-CE                 Navigation Error (NE)   4.73     19
Object Goal Navigation           HM3D OVON              SR                      47.3     11
Embodied Navigation              NavRAG-CE              Navigation Error (NE)   7.57     5
Sequential Navigation            SG3D-Nav               s-SR                    33.7     5
Sequential Grounding             SG3D-Nav               s-ACC                   28.3     2
