
NavGPT: Explicit Reasoning in Vision-and-Language Navigation with Large Language Models

About

Trained on an unprecedented scale of data, large language models (LLMs) such as ChatGPT and GPT-4 exhibit emergent reasoning abilities from model scaling. This trend underscores the potential of training LLMs with unlimited language data, advancing the development of a universal embodied agent. In this work, we introduce NavGPT, a purely LLM-based instruction-following navigation agent, to reveal the reasoning capability of GPT models in complex embodied scenes by performing zero-shot sequential action prediction for vision-and-language navigation (VLN). At each step, NavGPT takes textual descriptions of visual observations, navigation history, and future explorable directions as inputs, reasons about the agent's current status, and makes the decision to approach the target. Through comprehensive experiments, we demonstrate that NavGPT can explicitly perform high-level planning for navigation, including decomposing instructions into sub-goals, integrating commonsense knowledge relevant to the navigation task, identifying landmarks in observed scenes, tracking navigation progress, and adapting to exceptions by adjusting the plan. Furthermore, we show that LLMs are capable of generating high-quality navigational instructions from the observations and actions along a path, as well as drawing an accurate top-down metric trajectory given the agent's navigation history. Although the performance of NavGPT on zero-shot R2R tasks still falls short of trained models, we suggest adapting multi-modality inputs so that LLMs can serve as visual navigation agents, and applying the explicit reasoning of LLMs to benefit learning-based models.
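The per-step loop the abstract describes (textual observations, history, and explorable directions in; a chosen direction or stop out) can be sketched as below. This is a minimal illustration, not the authors' implementation: the prompt wording, the function names, and the answer-parsing convention are all assumptions, and the LLM itself is passed in as a plain callable.

```python
# Hypothetical sketch of one NavGPT-style zero-shot action-prediction step.
# All names and the prompt/answer format are assumptions for illustration.

def build_prompt(instruction, history, observations, directions):
    """Assemble the textual agent state: the instruction, the navigation
    history, descriptions of current visual observations, and the
    explorable directions to choose among."""
    lines = [f"Instruction: {instruction}", "Navigation history:"]
    lines += [f"  step {i}: {h}" for i, h in enumerate(history)]
    lines.append("Current observations:")
    lines += [f"  - {o}" for o in observations]
    lines.append("Explorable directions:")
    lines += [f"  ({i}) {d}" for i, d in enumerate(directions)]
    lines.append("Reason about your progress, then end your reply with the "
                 "index of the direction to take, or STOP if the goal is reached.")
    return "\n".join(lines)


def navgpt_step(llm, instruction, history, observations, directions):
    """One step of zero-shot sequential action prediction: query the LLM
    (any callable from prompt string to reply string) and parse its
    decision from the final line of the reply."""
    reply = llm(build_prompt(instruction, history, observations, directions))
    answer = reply.strip().splitlines()[-1]
    if "STOP" in answer.upper():
        return None  # the agent decides it has reached the target
    digits = [int(tok) for tok in
              answer.replace("(", " ").replace(")", " ").split()
              if tok.isdigit()]
    return digits[-1] if digits else 0  # fall back to direction 0
```

In use, `llm` would wrap a chat-model API call; the caller then executes the chosen direction, appends it to the history, and repeats until `navgpt_step` returns `None`.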

Gengze Zhou, Yicong Hong, Qi Wu · 2023

Related benchmarks

Task | Dataset | Result | Rank
Vision-and-Language Navigation | R2R (val unseen) | Success Rate (SR): 34 | 260
Vision-and-Language Navigation | REVERIE (val unseen) | SPL: 16.6 | 129
Vision-and-Language Navigation | R4R (val unseen) | Success Rate (SR): 15 | 52
Vision-and-Language Navigation | R2R (val seen) | Success Rate (SR): 15.77 | 51
Vision-and-Language Navigation | R2R (test) | SPL (Success weighted by Path Length): 13 | 38
Zero-Shot Aerial Navigation | AerialVLN (test) | Success Rate (SR): 34.04 | 18
Vision-and-Language Navigation | Large Real-Floor-Plan Environments | SPL: 0.29 | 16
Vision-and-Language Navigation | Small Synthetic Environments | SPL: 0.38 | 16
Vision-and-Language Navigation | R2R Discrete (val unseen) | Navigation Error (NE): 6.46 | 12
Vision-and-Language Navigation | REVERIE Discrete (val unseen) | OSR: 28.3 | 10

(Showing 10 of 14 rows.)
