DyNaVLM: Zero-Shot Vision-Language Navigation System with Dynamic Viewpoints and Self-Refining Graph Memory

About

We present DyNaVLM, an end-to-end vision-language navigation framework built on Vision-Language Models (VLMs). In contrast to prior methods constrained by fixed angular or distance intervals, our system empowers agents to freely select navigation targets via visual-language reasoning. At its core lies a self-refining graph memory that (1) stores object locations as executable topological relations, (2) enables cross-robot memory sharing through distributed graph updates, and (3) enhances the VLM's decision-making via retrieval augmentation. Operating without task-specific training or fine-tuning, DyNaVLM achieves strong performance on the GOAT and ObjectNav benchmarks, and real-world tests further validate its robustness and generalization. The system's three innovations (a dynamic action space formulation, collaborative graph memory, and training-free deployment) establish a new paradigm for scalable embodied robots, bridging the gap between discrete VLN tasks and continuous real-world navigation.
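
To make the graph memory concrete, the sketch below illustrates one plausible reading of the abstract: a topological store of object observations that is refined when an object is re-observed, merged across robots, and queried to build retrieval-augmented context for the VLM prompt. This is a minimal illustration under those assumptions only; the class and method names (`GraphMemory`, `observe`, `merge`, `retrieve_context`) are hypothetical and not taken from the paper.

```python
# Illustrative sketch only; not the paper's implementation.
from dataclasses import dataclass, field
import math
import time


def _dist(a: tuple, b: tuple) -> float:
    """Euclidean distance between two (x, y) points in meters."""
    return math.hypot(a[0] - b[0], a[1] - b[1])


@dataclass
class ObjectNode:
    """An observed object anchored at a metric position in the map frame."""
    label: str
    position: tuple  # (x, y) in meters
    last_seen: float = field(default_factory=time.time)


class GraphMemory:
    """Minimal self-refining topological memory (hypothetical interface).

    Nodes are detected objects; spatial relations between them and the agent
    are serialized as text and injected into the VLM prompt as retrieval-
    augmented context.
    """

    def __init__(self, merge_radius: float = 0.5):
        self.nodes: dict[str, ObjectNode] = {}
        self.merge_radius = merge_radius  # duplicate-merging threshold (m)

    def observe(self, label: str, position: tuple) -> None:
        """Insert or refine a node: a re-observation of the same object
        (same label, nearby position) updates the stored location."""
        for node in self.nodes.values():
            if node.label == label and _dist(node.position, position) < self.merge_radius:
                # Self-refinement: average the old and new position estimates.
                node.position = tuple((a + b) / 2 for a, b in zip(node.position, position))
                node.last_seen = time.time()
                return
        self.nodes[f"{label}_{len(self.nodes)}"] = ObjectNode(label, position)

    def merge(self, other: "GraphMemory") -> None:
        """Distributed update: fold another robot's graph into this one."""
        for node in other.nodes.values():
            self.observe(node.label, node.position)

    def retrieve_context(self, agent_pos: tuple, k: int = 5) -> str:
        """Return the k nearest objects as text relations for the VLM prompt."""
        ranked = sorted(self.nodes.values(), key=lambda n: _dist(n.position, agent_pos))
        return "\n".join(
            f"{n.label} is {_dist(n.position, agent_pos):.1f} m away at {n.position}"
            for n in ranked[:k]
        )


if __name__ == "__main__":
    robot_a, robot_b = GraphMemory(), GraphMemory()
    robot_a.observe("chair", (1.0, 2.0))
    robot_b.observe("sofa", (4.0, 1.5))
    robot_a.merge(robot_b)                       # cross-robot memory sharing
    print(robot_a.retrieve_context((0.0, 0.0)))  # context to prepend to the VLM query
```

In this reading, the "executable topological relations" are simply object-to-agent distances rendered as text, and the training-free property follows from the fact that all learning-free components (detection, graph maintenance, prompt construction) sit outside the frozen VLM.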

Zihe Ji, Huangxuan Lin, Yue Gao • 2025

Related benchmarks

| Task | Dataset | Metric | Value | Rank |
| --- | --- | --- | --- | --- |
| Multi-Modal Lifelong Navigation | GOAT-Bench unseen (val) | SR | 25.5 | 22 |
| Goal-conditioned navigation | GOAT-Bench | SR | 25.5 | 12 |
| Lifelong Multimodal Object Navigation | GOAT-Bench unseen (val) | s-SR | 0.255 | 10 |
| Subtask Navigation | GOAT-Bench unseen (val) | SR | 25.5 | 9 |
