
NaVid: Video-based VLM Plans the Next Step for Vision-and-Language Navigation

About

Vision-and-language navigation (VLN) stands as a key research problem in Embodied AI, aiming to enable agents to navigate unseen environments by following linguistic instructions. In this field, generalization is a long-standing challenge, whether to out-of-distribution scenes or from simulation to the real world (Sim2Real). In this paper, we propose NaVid, a video-based large vision-language model (VLM), to mitigate this generalization gap. NaVid makes the first endeavor to showcase the capability of VLMs to achieve state-of-the-art navigation performance without any maps, odometers, or depth inputs. Following a human instruction, NaVid requires only an on-the-fly video stream from a monocular RGB camera mounted on the robot to output the next-step action. Our formulation mimics how humans navigate and naturally avoids the problems introduced by odometer noise and the Sim2Real gaps arising from map or depth inputs. Moreover, our video-based approach can effectively encode the robot's historical observations as spatio-temporal context for decision making and instruction following. We train NaVid with 510k navigation samples collected from continuous environments, including action-planning and instruction-reasoning samples, along with 763k large-scale web data samples. Extensive experiments show that NaVid achieves state-of-the-art performance in both simulation environments and the real world, demonstrating superior cross-dataset and Sim2Real transfer. We thus believe our proposed VLM approach plans the next step not only for navigation agents but also for this research field.
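The abstract describes a simple inference interface: a language instruction plus the accumulated monocular RGB frame history goes in, and a single next-step action comes out. The sketch below makes that control flow concrete. It is a minimal illustration, not NaVid's implementation: the class name `NavidAgent`, the method names, the discrete action set, and the frame-count stopping rule are all hypothetical stand-ins for the actual video-based VLM call.

```python
# Hypothetical sketch of the instruction + RGB-history -> next-action loop
# described in the abstract. All names and the placeholder policy are
# assumptions for illustration only.
from dataclasses import dataclass, field
from typing import List

# A discrete action space, assumed here for illustration.
ACTIONS = ("FORWARD", "TURN_LEFT", "TURN_RIGHT", "STOP")

@dataclass
class NavidAgent:
    instruction: str
    frames: List[bytes] = field(default_factory=list)  # RGB history buffer

    def observe(self, rgb_frame: bytes) -> None:
        # Historical observations are kept as spatio-temporal context.
        self.frames.append(rgb_frame)

    def plan_next_action(self) -> str:
        # Placeholder for the VLM call: the real model would consume the
        # instruction plus the full frame history and emit one action.
        # Here we stop after three frames, purely to make the loop runnable.
        if len(self.frames) >= 3:
            return "STOP"
        return "FORWARD"

agent = NavidAgent(instruction="Walk past the sofa and stop at the door.")
actions = []
for frame in (b"f0", b"f1", b"f2"):  # stand-ins for camera frames
    agent.observe(frame)
    actions.append(agent.plan_next_action())
print(actions)  # ['FORWARD', 'FORWARD', 'STOP']
```

The key design point the abstract emphasizes is that the agent's only state is the frame buffer itself: no map, odometry, or depth channel is maintained, so the same loop runs unchanged in simulation and on a real robot.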

Jiazhao Zhang, Kunyu Wang, Rongtao Xu, Gengze Zhou, Yicong Hong, Xiaomeng Fang, Qi Wu, Zhizheng Zhang, He Wang • 2024

Related benchmarks

Task                           | Dataset                        | Metric                | Result | Rank
Vision-Language Navigation     | R2R-CE (val-unseen)            | Success Rate (SR)     | 41.9   | 433
Vision-and-Language Navigation | R2R (val unseen)               | Success Rate (SR)     | 37.4   | 344
Vision-Language Navigation     | RxR-CE (val-unseen)            | Success Rate (SR)     | 45.7   | 280
Vision-and-Language Navigation | REVERIE (val unseen)           | SPL                   | 20.8   | 173
Vision-and-Language Navigation | R2R-CE (val-seen)              | Success Rate (SR)     | 43     | 49
Vision-and-Language Navigation | R2R-CE unseen continuous (val) | Success Rate (SR)     | 37.4   | 35
Vertical Perception            | NavSpace                       | Navigation Error (NE) | 5.56   | 30
Precise Movement               | NavSpace                       | Navigation Error (NE) | 5.83   | 27
Vision-Language Navigation     | HA-VLN Unseen (val)            | Navigation Error (NE) | 7.49   | 23
Vision-Language Navigation     | VLN-CE R2R (val unseen)        | Navigation Error (NE) | 5.47   | 22

Showing 10 of 48 rows.
