
Target-Driven Structured Transformer Planner for Vision-Language Navigation

About

Vision-language navigation is the task of directing an embodied agent to navigate in 3D scenes with natural language instructions. For the agent, inferring the long-term navigation target from visual-linguistic clues is crucial for reliable path planning, which, however, has rarely been studied in the literature. In this article, we propose a Target-Driven Structured Transformer Planner (TD-STP) for long-horizon goal-guided and room layout-aware navigation. Specifically, we devise an Imaginary Scene Tokenization mechanism for explicit estimation of the long-term target (even when it is located in unexplored environments). In addition, we design a Structured Transformer Planner that elegantly incorporates the explored room layout into a neural attention architecture for structured and global planning. Experimental results demonstrate that our TD-STP substantially improves the previous best methods' success rates by 2% and 5% on the test sets of the R2R and REVERIE benchmarks, respectively. Our code is available at https://github.com/YushengZhao/TD-STP .
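The core ideas described above can be illustrated with a minimal sketch: explored rooms become nodes in a graph whose connectivity masks self-attention (the structured planner), while one extra "imaginary" target token attends to every room, standing in for the explicitly estimated long-term target. This is a simplified NumPy illustration under assumed shapes, not the paper's actual architecture; the function names, the single-head attention, and the adjacency-based mask are all assumptions for exposition.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def structured_attention(node_feats, adjacency, target_token):
    """One masked self-attention step over the explored room graph.

    node_feats:   (N, d) features of the N explored-room nodes.
    adjacency:    (N, N) 0/1 room-connectivity matrix (1 on the diagonal,
                  so each room also attends to itself).
    target_token: (d,) an 'imaginary' long-term target token that may
                  attend to, and be attended by, every room node.
    Returns updated (N + 1, d) features (rooms first, target last).
    """
    feats = np.vstack([node_feats, target_token[None, :]])  # (N+1, d)
    n, d = feats.shape

    # Rooms attend only along room-layout edges; the extra target
    # row/column stays all-ones, giving the target global attention.
    mask = np.ones((n, n))
    mask[:-1, :-1] = adjacency

    scores = feats @ feats.T / np.sqrt(d)        # scaled dot-product
    scores = np.where(mask > 0, scores, -1e9)    # mask non-edges
    return softmax(scores, axis=-1) @ feats
```

As a usage sketch: with four explored rooms where only rooms 0 and 1 are connected, `structured_attention(feats, adj, tgt)` returns five updated feature vectors, and each room's update mixes in only its layout neighbors plus the global target token.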

Yusheng Zhao, Jinyu Chen, Chen Gao, Wenguan Wang, Lirong Yang, Haibing Ren, Huaxia Xia, Si Liu• 2022

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Vision-and-Language Navigation | R2R (val unseen) | Success Rate (SR) | 70 | 344 |
| Vision-and-Language Navigation | REVERIE (val unseen) | SPL | 27.32 | 173 |
| Vision-Language Navigation | R2R Unseen (test) | SR | 67 | 134 |
| Vision-and-Language Navigation | R2R (val seen) | Success Rate (SR) | 77 | 68 |
| Vision-and-Language Navigation | REVERIE Unseen (test) | Success Rate (SR) | 35.89 | 59 |
| Vision-Language Navigation | R2R unseen v1.0 (val) | SR | 70 | 37 |
| Vision-Language Navigation | R2R 1 (test unseen) | Success Rate | 0.67 | 18 |
| Remote Object Grounding | REVERIE (test unseen) | OSR | 40.26 | 15 |
| Remote Object Grounding | REVERIE (val unseen) | OSR | 39.48 | 15 |
