
Room-Across-Room: Multilingual Vision-and-Language Navigation with Dense Spatiotemporal Grounding

About

We introduce Room-Across-Room (RxR), a new Vision-and-Language Navigation (VLN) dataset. RxR is multilingual (English, Hindi, and Telugu) and larger (more paths and instructions) than other VLN datasets. It emphasizes the role of language in VLN by addressing known biases in paths and eliciting more references to visible entities. Furthermore, each word in an instruction is time-aligned to the virtual poses of instruction creators and validators. We establish baseline scores for monolingual and multilingual settings and multitask learning when including Room-to-Room annotations. We also provide results for a model that learns from synchronized pose traces by focusing only on portions of the panorama attended to in human demonstrations. The size, scope and detail of RxR dramatically expands the frontier for research on embodied language agents in simulated, photo-realistic environments.
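The time-alignment described above pairs each instruction word with the annotator's virtual pose at the moment the word was spoken. A minimal sketch of how such an alignment could be computed from timestamps, assuming a simplified annotation format (the field layout here is illustrative, not the actual RxR schema):

```python
# Sketch: align each word's timestamp to the most recent recorded pose.
# Assumes pose timestamps are sorted ascending; not the real RxR format.
from bisect import bisect_right

def align_words_to_poses(word_times, pose_times):
    """For each word timestamp, return the index of the latest pose
    recorded at or before that time (clamped to the first pose)."""
    return [max(bisect_right(pose_times, t) - 1, 0) for t in word_times]

# Toy example: 4 words uttered while 3 virtual poses were recorded.
word_times = [0.2, 1.1, 2.5, 3.0]  # seconds at which each word was spoken
pose_times = [0.0, 1.0, 2.0]       # timestamps of recorded poses
print(align_words_to_poses(word_times, pose_times))  # -> [0, 1, 2, 2]
```

Such an index list is what lets a model attend only to the panorama regions the human demonstrator was looking at while producing each word.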

Alexander Ku, Peter Anderson, Roma Patel, Eugene Ie, Jason Baldridge • 2020

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Vision-and-Language Navigation | R2R (val unseen) | Success Rate (SR): 37 | 260 |
| Vision-and-Language Navigation | RxR-CE (val unseen) | SR: 25.6 | 172 |
| Vision-and-Language Navigation | R2R (test unseen) | SR: 86 | 122 |
| Vision-and-Language Navigation | RxR (val unseen) | SR: 26.1 | 26 |
| Vision-and-Language Navigation | RxR (val seen) | SR: 25.2 | 21 |
| Vision-and-Language Navigation | RxR (test unseen) | SR: 93.9 | 16 |
| Vision-and-Language Navigation | RxR (val seen) | SR: 28.6 | 9 |
| Navigation | RxR (test) | PL Score: 17.05 | 6 |
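Success Rate (SR), the metric in most rows above, is the fraction of episodes in which the agent stops close enough to the goal. A minimal sketch, assuming the common VLN convention of a 3 m success threshold (confirm the exact threshold against each benchmark's own evaluation code):

```python
# Sketch of the Success Rate (SR) metric: an episode counts as a
# success if the agent's final distance to the goal is within a
# threshold (3 m is the usual VLN convention, assumed here).
def success_rate(final_dists, threshold=3.0):
    """Percentage of episodes ending within `threshold` of the goal."""
    successes = sum(1 for d in final_dists if d <= threshold)
    return 100.0 * successes / len(final_dists)

print(success_rate([1.2, 4.8, 2.9, 7.5]))  # -> 50.0
```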

Other info

Code
