
Tactical Rewind: Self-Correction via Backtracking in Vision-and-Language Navigation

About

We present the Frontier Aware Search with backTracking (FAST) Navigator, a general framework for action decoding that achieves state-of-the-art results on the Room-to-Room (R2R) Vision-and-Language Navigation challenge of Anderson et al. (2018). Given a natural language instruction and photo-realistic image views of a previously unseen environment, the agent is tasked with navigating from a source to a target location as quickly as possible. While existing approaches make local action decisions or score entire trajectories using beam search, ours balances local and global signals while exploring an unobserved environment. Importantly, this lets the agent act greedily but use global signals to backtrack when necessary. Applying the FAST framework to existing state-of-the-art models yields a 17% relative gain (an absolute 6% gain) in Success rate weighted by Path Length (SPL).
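The core idea, balancing a greedy local policy against a global score over partial trajectories, can be illustrated with a minimal best-first search sketch. This is an assumption-laden toy, not the paper's implementation: the graph, the `global_score` function, and the goal test are all hypothetical stand-ins for the learned scoring models FAST actually uses.

```python
import heapq

def fast_navigate(graph, start, global_score, is_goal, max_steps=50):
    """Hedged sketch of frontier-aware search with backtracking.

    graph: dict mapping a node to its neighbor nodes.
    global_score(path): higher is better for a partial trajectory.
    is_goal(node): True when the agent believes it has arrived.
    """
    # Max-heap (via negated scores) of candidate partial trajectories.
    frontier = [(-global_score((start,)), (start,))]
    visited = set()
    for _ in range(max_steps):
        if not frontier:
            break
        # "Backtracking" here means jumping to the globally best
        # partial path on the frontier, wherever it ends.
        _, path = heapq.heappop(frontier)
        node = path[-1]
        if is_goal(node):
            return path
        if node in visited:
            continue
        visited.add(node)
        # Greedy expansion: every one-step extension joins the frontier,
        # scored by the global signal.
        for nxt in graph[node]:
            if nxt not in path:
                new_path = path + (nxt,)
                heapq.heappush(frontier, (-global_score(new_path), new_path))
    return None
```

With a trivial global score (shorter partial paths rank higher) this degenerates to uniform-cost search; the paper's contribution is learning scores that make the greedy/backtrack trade-off work on real instructions.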

Liyiming Ke, Xiujun Li, Yonatan Bisk, Ari Holtzman, Zhe Gan, Jingjing Liu, Jianfeng Gao, Yejin Choi, Siddhartha Srinivasa • 2019

Related benchmarks

Task | Dataset | Metric | Result | Rank
Vision-and-Language Navigation | R2R (val unseen) | Success Rate (SR) | 63 | 260
Vision-Language Navigation | R2R (test unseen) | SR | 61 | 122
Vision-Language Navigation | R2R (val seen) | Success Rate (SR) | 70 | 120
Vision-Language Navigation | R2R Unseen (test) | SR | 61 | 116
Vision-and-Language Navigation | Room-to-Room (R2R) Unseen (val) | SR | 63 | 52
Vision-and-Language Navigation | R4R unseen (val) | Success Rate (SR) | 13.3 | 52
Navigation | REVERIE Unseen (test) | SR | 14.18 | 43
Navigation | REVERIE (val unseen) | Success Rate (SR) | 10.08 | 34
Remote Grounding | REVERIE Unseen (test) | RGS | 7.07 | 33
Vision-and-Language Navigation | Room-to-Room (R2R) Seen (val) | NE (Navigation Error) | 3.13 | 32
Showing 10 of 23 rows

Other info

Code
