
General Scene Adaptation for Vision-and-Language Navigation

About

Vision-and-Language Navigation (VLN) tasks mainly evaluate agents based on one-time execution of individual instructions across multiple environments, aiming to develop agents capable of functioning in any environment in a zero-shot manner. However, real-world navigation robots often operate in persistent environments with relatively consistent physical layouts, visual observations, and language styles from instructors. Such a gap in the task setting presents an opportunity to improve VLN agents by incorporating continuous adaptation to specific environments. To better reflect these real-world conditions, we introduce GSA-VLN, a novel task requiring agents to execute navigation instructions within a specific scene and simultaneously adapt to it for improved performance over time. Evaluating the proposed task requires addressing two challenges in existing VLN datasets: the lack of out-of-distribution (OOD) data, and the limited number and style diversity of instructions for each scene. Therefore, we propose a new dataset, GSA-R2R, which significantly expands the diversity and quantity of environments and instructions for the R2R dataset to evaluate agent adaptability in both in-distribution (ID) and OOD contexts. Furthermore, we design a three-stage instruction orchestration pipeline that leverages LLMs to refine speaker-generated instructions and applies role-playing techniques to rephrase instructions into different speaking styles. This is motivated by the observation that each individual user often has consistent signatures or preferences in their instructions. We conducted extensive experiments on GSA-R2R to thoroughly evaluate our dataset and benchmark various methods. Based on our findings, we propose a novel method, GR-DUET, which incorporates memory-based navigation graphs with an environment-specific training strategy, achieving state-of-the-art results on all GSA-R2R splits.
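The memory-based navigation graph mentioned above can be pictured as a topological map the agent keeps across episodes in the same scene: each visited viewpoint becomes a node, and navigable connections become edges, so later episodes start with richer candidate sets. The following is a minimal illustrative sketch, not the authors' GR-DUET implementation; the class and method names (`SceneMemoryGraph`, `update`, `candidates`) and the viewpoint-ID scheme are assumptions.

```python
# Hedged sketch of a persistent scene-level memory graph, assuming viewpoints
# are identified by string IDs and connectivity is undirected. Names are
# illustrative, not taken from the GR-DUET codebase.
from collections import defaultdict


class SceneMemoryGraph:
    """Accumulates a scene's navigation graph across episodes."""

    def __init__(self):
        self.nodes = {}                # viewpoint_id -> latest visual feature
        self.edges = defaultdict(set)  # viewpoint_id -> set of neighbor ids

    def update(self, viewpoint_id, feature, neighbors):
        """Record an observed viewpoint and its navigable neighbors."""
        self.nodes[viewpoint_id] = feature
        for n in neighbors:
            self.edges[viewpoint_id].add(n)
            self.edges[n].add(viewpoint_id)  # undirected connectivity

    def candidates(self, viewpoint_id):
        """Known neighbors of a viewpoint; grows as the agent explores."""
        return sorted(self.edges[viewpoint_id])


# Usage: the graph persists across episodes, so exploration in one episode
# expands the candidate actions available in later ones.
memory = SceneMemoryGraph()
memory.update("vp_01", feature=[0.1, 0.2], neighbors=["vp_02", "vp_03"])
memory.update("vp_02", feature=[0.3, 0.4], neighbors=["vp_01"])
print(memory.candidates("vp_01"))  # -> ['vp_02', 'vp_03']
```

The design choice here is to keep the graph per scene rather than per episode, which is what distinguishes the persistent-environment setting of GSA-VLN from standard one-shot VLN evaluation.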

Haodong Hong, Yanyuan Qiao, Sen Wang, Jiajun Liu, Qi Wu • 2025

Related benchmarks

Task                            Dataset                         Metric  Result  Rank
Vision-and-Language Navigation  GSA-R2R N-Scene (test)          SR      48.1    14
Vision-and-Language Navigation  GSA-R2R R-Basic (test)          TL      9.4     10
Vision-and-Language Navigation  GSA-R2R N-Basic (test)          TL      8.9     10
Vision-and-Language Navigation  GSA-R2R Child instructions      SR      65.2    10
Vision-and-Language Navigation  GSA-R2R Keith instructions      SR      66.7    10
Vision-and-Language Navigation  GSA-R2R Moira instructions      SR      60.9    10
Vision-and-Language Navigation  GSA-R2R Rachel instructions     SR      67.1    10
Vision-and-Language Navigation  GSA-R2R Sheldon instructions    SR      63.9    10
Vision-and-Language Navigation  GSA basic (test)                SR      66.96   9

SR = Success Rate; TL = Trajectory Length.
