
Speaker-Follower Models for Vision-and-Language Navigation

About

Navigation guided by natural language instructions presents a challenging reasoning problem for instruction followers. Natural language instructions typically identify only a few high-level decisions and landmarks rather than complete low-level motor behaviors; much of the missing information must be inferred based on perceptual context. In machine learning settings, this is doubly challenging: it is difficult to collect enough annotated data to enable learning of this reasoning process from scratch, and also difficult to implement the reasoning process using generic sequence models. Here we describe an approach to vision-and-language navigation that addresses both these issues with an embedded speaker model. We use this speaker model to (1) synthesize new instructions for data augmentation and to (2) implement pragmatic reasoning, which evaluates how well candidate action sequences explain an instruction. Both steps are supported by a panoramic action space that reflects the granularity of human-generated instructions. Experiments show that all three components of this approach---speaker-driven data augmentation, pragmatic reasoning and panoramic action space---dramatically improve the performance of a baseline instruction follower, more than doubling the success rate over the best existing approach on a standard benchmark.
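The pragmatic-reasoning step described above can be sketched as a rescoring loop: the follower proposes candidate action sequences, and the embedded speaker scores how well each candidate explains the instruction; the two log-probabilities are mixed with a weight. This is a minimal illustrative sketch, not the paper's implementation; the toy scoring functions and the `lam` mixing weight below are assumptions standing in for trained models.

```python
# Hedged sketch of speaker-based pragmatic rescoring: rank follower-proposed
# candidate routes by a weighted sum of speaker and follower log-probabilities.
# All model functions here are hypothetical toy stand-ins, not trained networks.

def rescore_candidates(instruction, candidates, follower_logp, speaker_logp, lam=0.95):
    """Return candidate routes sorted best-first by the mixed pragmatic score.

    lam weights the speaker term; lam=0 reduces to plain follower decoding.
    """
    def score(route):
        return (lam * speaker_logp(instruction, route)
                + (1.0 - lam) * follower_logp(instruction, route))
    return sorted(candidates, key=score, reverse=True)

# Toy stand-in models (assumptions for illustration only):
def toy_follower_logp(instruction, route):
    # follower mildly prefers shorter routes
    return -float(len(route))

def toy_speaker_logp(instruction, route):
    # speaker prefers routes whose steps appear as landmarks in the instruction
    mentioned = sum(1 for step in route if step in instruction)
    return float(mentioned) - len(route)

candidates = [("hallway", "kitchen"),
              ("hallway", "stairs", "kitchen"),
              ("garage",)]
best = rescore_candidates("walk down the hallway to the kitchen",
                          candidates, toy_follower_logp, toy_speaker_logp)[0]
print(best)  # -> ('hallway', 'kitchen')
```

With a high `lam`, the speaker term dominates, so the route whose steps best match the instruction's landmarks wins even when the follower alone would rank candidates differently.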

Daniel Fried, Ronghang Hu, Volkan Cirik, Anna Rohrbach, Jacob Andreas, Louis-Philippe Morency, Taylor Berg-Kirkpatrick, Kate Saenko, Dan Klein, Trevor Darrell • 2018

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Vision-and-Language Navigation | R2R (val unseen) | Success Rate (SR) | 36 | 260 |
| Vision-Language Navigation | R2R (test unseen) | SR | 54 | 122 |
| Vision-Language Navigation | R2R (val seen) | Success Rate (SR) | 66 | 120 |
| Vision-Language Navigation | R2R Unseen (test) | SR | 53 | 116 |
| Vision-and-Language Navigation | Room-to-Room (R2R) Unseen (val) | SR | 50 | 52 |
| Vision-and-Language Navigation | R4R unseen (val) | Success Rate (SR) | 24.9 | 52 |
| Vision-and-Language Navigation | R2R (val seen) | Success Rate (SR) | 66 | 51 |
| Vision-and-Language Navigation | R2R (test) | SPL (Success weighted by Path Length) | 28 | 38 |
| Vision-and-Language Navigation | Room-to-Room (R2R) Seen (val) | NE (Navigation Error) | 3.36 | 32 |
| Vision-Language Navigation | R2R unseen v1.0 (val) | SR | 2.19e+3 | 24 |
Showing 10 of 50 rows
