
Analyzing Generalization of Vision and Language Navigation to Unseen Outdoor Areas

About

Vision and language navigation (VLN) is a challenging visually-grounded language understanding task. Given a natural language navigation instruction, a visual agent interacts with a graph-based environment equipped with panorama images and tries to follow the described route. Most prior work has been conducted in indoor scenarios, where the best results were obtained for navigation on routes similar to the training routes, with sharp drops in performance when testing on unseen environments. We focus on VLN in outdoor scenarios and find that, in contrast to indoor VLN, most of the gain in outdoor VLN on unseen data is due to features such as junction type embedding or heading delta that are specific to the respective environment graph, while image information plays a very minor role in generalizing VLN to unseen outdoor areas. These findings reveal a bias toward specifics of graph representations of urban environments, demanding that VLN tasks grow in scale and diversity of geographical environments.
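To make the graph-specific features concrete, here is a minimal sketch of how the two features named in the abstract might be computed on a navigation graph. The function names, the adjacency-dict representation, and the degree cap are illustrative assumptions, not the authors' implementation.

```python
import math

def heading_delta(agent_heading_deg, next_edge_bearing_deg):
    """Signed angular difference in degrees, in [-180, 180), between the
    agent's current heading and the bearing of the outgoing edge it should
    take next. Hypothetical version of the 'heading delta' feature."""
    return (next_edge_bearing_deg - agent_heading_deg + 180) % 360 - 180

def junction_type(graph, node, max_degree=4):
    """One-hot vector over the node's out-degree (capped at max_degree),
    a stand-in for the 'junction type embedding' feature. `graph` is an
    adjacency dict mapping node -> list of neighbor nodes."""
    degree = min(len(graph[node]), max_degree)
    return [1.0 if i == degree else 0.0 for i in range(max_degree + 1)]
```

Note that neither feature looks at panorama images at all; both are read directly off the environment graph, which is why an agent can exploit them without grounding the instruction visually.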

Raphael Schumann, Stefan Riezler • 2022

Related benchmarks

Task                           | Dataset                 | Metric               | Result | Rank
Vision-Language Navigation     | TOUCHDOWN (dev)         | Task Completion (TC) | 30.05  | 17
Vision-Language Navigation     | TOUCHDOWN (test)        | TC                   | 29.6   | 17
Vision-and-Language Navigation | Touchdown Seen (test)   | TC                   | 36.9   | 13
Vision-and-Language Navigation | Touchdown Unseen (test) | nDTW                 | 26.3   | 11
Vision-and-Language Navigation | map2seq Seen (test)     | nDTW                 | 62.3   | 10
Vision-and-Language Navigation | map2seq Unseen (test)   | nDTW                 | 42.2   | 10
Vision-and-Language Navigation | Map2seq (dev)           | TC                   | 0.4988 | 10
Vision-and-Language Navigation | Map2seq (test)          | TC                   | 48.53  | 10
Vision-and-Language Navigation | Touchdown seen (dev)    | SDTW                 | 28.3   | 9
Vision-and-Language Navigation | map2seq unseen (dev)    | nDTW                 | 8.9    | 8

Showing 10 of 16 rows
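Several rows above report nDTW (normalized Dynamic Time Warping), which scores how closely the agent's path follows the reference path rather than only whether it reaches the goal. A minimal sketch of the standard formulation, nDTW = exp(-DTW(P, R) / (|R| * d_th)), follows; the node-coordinate representation and the threshold value are illustrative assumptions.

```python
import math

def dtw(pred, ref, dist):
    """Classic dynamic-programming DTW cost between two paths,
    given a pairwise distance function over path elements."""
    n, m = len(pred), len(ref)
    inf = float("inf")
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost[i][j] = dist(pred[i - 1], ref[j - 1]) + min(
                cost[i - 1][j], cost[i][j - 1], cost[i - 1][j - 1]
            )
    return cost[n][m]

def ndtw(pred, ref, dist, d_th):
    """Normalized DTW in (0, 1]; 1.0 means the predicted path
    matches the reference exactly. d_th is the success threshold
    distance used for normalization."""
    return math.exp(-dtw(pred, ref, dist) / (len(ref) * d_th))
```

An identical predicted and reference path yields an nDTW of 1.0, and the score decays smoothly as the predicted path drifts from the reference, which is why nDTW is preferred over task completion alone for diagnosing partial instruction following.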

Other info

Code
