
General Scene Adaptation for Vision-and-Language Navigation

About

Vision-and-Language Navigation (VLN) tasks mainly evaluate agents on one-time execution of individual instructions across multiple environments, aiming to develop agents that can function in any environment zero-shot. However, real-world navigation robots often operate in persistent environments with relatively consistent physical layouts, visual observations, and instruction styles from their users. This gap in the task setting presents an opportunity to improve VLN agents through continuous adaptation to a specific environment. To better reflect these real-world conditions, we introduce GSA-VLN (General Scene Adaptation for VLN), a novel task requiring agents to execute navigation instructions within a specific scene while simultaneously adapting to it for improved performance over time. Evaluating the proposed task requires addressing two shortcomings of existing VLN datasets: the lack of out-of-distribution (OOD) data, and the limited number and style diversity of instructions for each scene. We therefore propose a new dataset, GSA-R2R, which significantly expands the diversity and quantity of environments and instructions in R2R, enabling evaluation of agent adaptability in both in-distribution (ID) and OOD contexts. Furthermore, we design a three-stage instruction orchestration pipeline that leverages LLMs to refine speaker-generated instructions and applies role-playing techniques to rephrase them in different speaking styles, motivated by the observation that individual users often have consistent signatures or preferences in their instructions. We conducted extensive experiments on GSA-R2R to thoroughly evaluate the dataset and benchmark various methods. Based on our findings, we propose a novel method, GR-DUET, which combines memory-based navigation graphs with an environment-specific training strategy, achieving state-of-the-art results on all GSA-R2R splits.
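The abstract does not detail GR-DUET's architecture, but the core idea it names, a topological navigation graph that persists in memory across episodes within one scene, can be sketched as follows. This is an illustrative sketch only; the class and method names (`PersistentSceneGraph`, `observe`, `coverage`) are hypothetical and do not reflect the paper's actual implementation.

```python
from collections import defaultdict

class PersistentSceneGraph:
    """Toy topological memory that accumulates across episodes in one scene.

    Unlike standard VLN agents, which reset their map every episode, this
    structure keeps visited viewpoints and edges, so later episodes in the
    same environment start with richer scene knowledge.
    """

    def __init__(self):
        self.nodes = {}                # viewpoint id -> latest visual feature
        self.edges = defaultdict(set)  # viewpoint id -> connected viewpoint ids

    def observe(self, viewpoint, feature, neighbors):
        """Record a visited viewpoint and its local connectivity."""
        self.nodes[viewpoint] = feature
        for n in neighbors:
            self.edges[viewpoint].add(n)
            self.edges[n].add(viewpoint)  # store edges symmetrically

    def coverage(self):
        """Return (#known viewpoints, #unique undirected edges)."""
        return len(self.nodes), sum(len(v) for v in self.edges.values()) // 2
```

Calling `observe` over several episodes lets the node and edge counts grow monotonically, which is one way a scene-adaptive agent could measure how much of a persistent environment it has memorized.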

Haodong Hong, Yanyuan Qiao, Sen Wang, Jiajun Liu, Qi Wu • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Vision-and-Language Navigation | GSA-R2R N-Scene (test) | Success Rate (SR) | 48.1 | 26 |
| Vision-and-Language Navigation | IR2R (val-unseen) | Trajectory Length (TL) | 11 | 21 |
| Vision-and-Language Navigation | IR2R (val-seen) | Trajectory Length (TL) | 12.5 | 21 |
| Vision-and-Language Navigation | GSA-R2R User Instructions Residential v1 (test) | Success Rate (SR) | 64.8 | 12 |
| Vision-and-Language Navigation | GSA-R2R Basic Instructions Residential v1 (test) | Success Rate (SR) | 69.3 | 12 |
| Vision-and-Language Navigation | GSA-R2R Basic Instructions Non-Residential v1 (test) | Success Rate (SR) | 56.6 | 12 |
| Vision-and-Language Navigation | GSA-R2R R-Basic (test) | Trajectory Length (TL) | 9.4 | 10 |
| Vision-and-Language Navigation | GSA-R2R N-Basic (test) | Trajectory Length (TL) | 8.9 | 10 |
| Vision-and-Language Navigation | GSA-R2R Child instructions | Success Rate (SR) | 65.2 | 10 |
| Vision-and-Language Navigation | GSA-R2R Keith instructions | Success Rate (SR) | 66.7 | 10 |

Showing 10 of 14 rows.
