A Recurrent Vision-and-Language BERT for Navigation
About
Accuracy on many visiolinguistic tasks has benefited significantly from the application of vision-and-language (V&L) BERT. However, its application to the task of vision-and-language navigation (VLN) remains limited. One reason is the difficulty of adapting the BERT architecture to the partially observable Markov decision process underlying VLN, which requires history-dependent attention and decision making. In this paper we propose a recurrent, time-aware BERT model for VLN. Specifically, we equip the BERT model with a recurrent function that maintains cross-modal state information for the agent. Through extensive experiments on R2R and REVERIE we demonstrate that our model can replace more complex encoder-decoder models and achieve state-of-the-art results. Moreover, our approach generalises to other transformer-based architectures, supports pre-training, and can solve navigation and referring-expression tasks simultaneously.
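The core idea, a recurrent function that carries cross-modal state across time steps, can be sketched as a state token that is fed back into the model at every navigation step. The following is a minimal illustrative sketch in plain NumPy with a single attention pass; all names (`recurrent_step`, token shapes, the scoring rule) are simplifications for illustration, not the paper's actual multi-layer BERT implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # Scaled dot-product self-attention over a token sequence.
    d = q.shape[-1]
    return softmax(q @ k.T / np.sqrt(d)) @ v

def recurrent_step(state, lang_tokens, vis_tokens):
    # Concatenate [state; language; current visual observation],
    # run one attention pass, and read the updated cross-modal
    # state from the state token's output position. The updated
    # state replaces the old one at the next time step -- this is
    # the "recurrence" of the model, sketched with one layer.
    tokens = np.vstack([state[None, :], lang_tokens, vis_tokens])
    out = attention(tokens, tokens, tokens)
    new_state = out[0]
    # Score each candidate view against the updated state
    # (illustrative action-selection rule).
    action_scores = out[1 + len(lang_tokens):] @ new_state
    return new_state, action_scores

rng = np.random.default_rng(0)
d = 16
state = rng.normal(size=d)            # initial state embedding
lang = rng.normal(size=(5, d))        # instruction tokens (fixed)
for t in range(3):                    # three navigation steps
    vis = rng.normal(size=(4, d))     # 4 candidate views at step t
    state, scores = recurrent_step(state, lang, vis)
    action = int(np.argmax(scores))   # pick the next viewpoint
```

Note the design point this illustrates: the language tokens stay fixed across steps, while the single state token is rewritten each step, so history-dependent information accumulates without re-encoding the full trajectory.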
Related benchmarks
| Task | Dataset | Metric | Value | Rank |
|---|---|---|---|---|
| Vision-Language Navigation | R2R-CE (val-unseen) | Success Rate (SR) | 44 | 266 |
| Vision-and-Language Navigation | R2R (val unseen) | Success Rate (SR) | 63 | 260 |
| Vision-Language Navigation | RxR-CE (val-unseen) | SR | 40.5 | 172 |
| Vision-and-Language Navigation | REVERIE (val unseen) | SPL | 24.9 | 129 |
| Vision-Language Navigation | R2R (test unseen) | SR | 63 | 122 |
| Vision-Language Navigation | R2R (val seen) | Success Rate (SR) | 72 | 120 |
| Vision-Language Navigation | R2R Unseen (test) | SR | 63 | 116 |
| Vision-and-Language Navigation | R4R unseen (val) | Success Rate (SR) | 43.6 | 52 |
| Vision-and-Language Navigation | Room-to-Room (R2R) Unseen (val) | SR | 63 | 52 |
| Vision-and-Language Navigation | R2R-CE (test-unseen) | SR | 42 | 50 |