
A Recurrent Vision-and-Language BERT for Navigation

About

Accuracy on many visiolinguistic tasks has benefited significantly from the application of vision-and-language (V&L) BERT. However, its application to the task of vision-and-language navigation (VLN) remains limited. One reason for this is the difficulty of adapting the BERT architecture to the partially observable Markov decision process present in VLN, which requires history-dependent attention and decision making. In this paper we propose a recurrent BERT model that is time-aware for use in VLN. Specifically, we equip the BERT model with a recurrent function that maintains cross-modal state information for the agent. Through extensive experiments on R2R and REVERIE we demonstrate that our model can replace more complex encoder-decoder models to achieve state-of-the-art results. Moreover, our approach can be generalised to other transformer-based architectures, supports pre-training, and is capable of solving navigation and referring-expression tasks simultaneously.
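The core idea in the abstract — a state token that attends over instruction and observation features at each step and is carried forward in time — can be illustrated with a minimal NumPy sketch. This is a hypothetical toy update, not the paper's actual transformer layers: the function names (`attend`, `recurrent_step`) and the residual-style mixing are assumptions for illustration only.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attend(query, keys, values):
    # Scaled dot-product attention for a single query vector.
    scores = keys @ query / np.sqrt(query.shape[-1])
    return softmax(scores) @ values

def recurrent_step(state, lang_tokens, vis_tokens):
    # The agent's state vector attends over the encoded instruction and
    # the current visual observation; the attended summary is mixed back
    # into the state, so history accumulates across time steps.
    # (Hypothetical simplification of the paper's recurrent function.)
    context = np.concatenate([lang_tokens, vis_tokens], axis=0)
    return 0.5 * state + 0.5 * attend(state, context, context)

rng = np.random.default_rng(0)
d = 8
state = rng.standard_normal(d)          # initial cross-modal state
lang = rng.standard_normal((5, d))      # encoded instruction tokens
for t in range(3):                      # three navigation steps
    vis = rng.standard_normal((4, d))   # view features at step t
    state = recurrent_step(state, lang, vis)
```

The point of the sketch is that, unlike an encoder-decoder with an explicit LSTM, the only recurrence needed is this single state vector fed back into the attention at the next step.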

Yicong Hong, Qi Wu, Yuankai Qi, Cristian Rodriguez-Opazo, Stephen Gould • 2020

Related benchmarks

Task                           | Dataset                         | Metric            | Result | Rank
Vision-Language Navigation     | R2R-CE (val-unseen)             | Success Rate (SR) | 44     | 266
Vision-and-Language Navigation | R2R (val unseen)                | Success Rate (SR) | 63     | 260
Vision-Language Navigation     | RxR-CE (val-unseen)             | Success Rate (SR) | 40.5   | 172
Vision-and-Language Navigation | REVERIE (val unseen)            | SPL               | 24.9   | 129
Vision-Language Navigation     | R2R (test unseen)               | Success Rate (SR) | 63     | 122
Vision-Language Navigation     | R2R (val seen)                  | Success Rate (SR) | 72     | 120
Vision-Language Navigation     | R2R Unseen (test)               | Success Rate (SR) | 63     | 116
Vision-and-Language Navigation | R4R unseen (val)                | Success Rate (SR) | 43.6   | 52
Vision-and-Language Navigation | Room-to-Room (R2R) Unseen (val) | Success Rate (SR) | 63     | 52
Vision-and-Language Navigation | R2R-CE (test-unseen)            | Success Rate (SR) | 42     | 50
Showing 10 of 46 rows

Other info

Code
