
Curriculum Learning for Vision-and-Language Navigation

About

Vision-and-Language Navigation (VLN) is a task in which an agent navigates an embodied indoor environment by following human instructions. Previous works ignore the distribution of sample difficulty, and we argue that this potentially degrades agent performance. To tackle this issue, we propose a novel curriculum-based training paradigm for VLN tasks that balances human prior knowledge with the agent's learning progress on training samples. We develop principles of curriculum design and re-arrange the benchmark Room-to-Room (R2R) dataset to make it suitable for curriculum training. Experiments show that our method is model-agnostic and can significantly improve the performance, generalizability, and training efficiency of current state-of-the-art navigation agents without increasing model complexity.
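The abstract's core idea, ordering training samples easy-to-hard, can be illustrated with a minimal "baby-step" curriculum sketch. This is an assumption-laden illustration, not the paper's actual algorithm: the difficulty proxy (ground-truth path length) and the staged schedule below are hypothetical stand-ins for the paper's curriculum design and R2R re-arrangement.

```python
import random

def difficulty(sample):
    # Hypothetical difficulty proxy: longer ground-truth paths are harder.
    # The paper's actual difficulty criterion may differ.
    return len(sample["path"])

def curriculum_samples(samples, num_stages=3, epochs_per_stage=2):
    """Yield training samples easy-to-hard: each stage unlocks the next
    slice of the difficulty-sorted dataset (a 'baby-step' schedule)."""
    ordered = sorted(samples, key=difficulty)
    stage_size = max(1, len(ordered) // num_stages)
    for stage in range(1, num_stages + 1):
        # The training pool grows to include harder samples each stage.
        pool = ordered[: min(stage * stage_size, len(ordered))]
        for _ in range(epochs_per_stage):
            random.shuffle(pool)  # shuffle within the current stage
            yield from pool
```

Because the schedule is model-agnostic, any VLN agent's training loop can consume this iterator in place of a plain shuffled epoch.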

Jiwen Zhang, Zhongyu Wei, Jianqing Fan, Jiajie Peng • 2021

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Vision-and-Language Navigation | R2R (val unseen) | Success Rate (SR): 47.6 | 260 |
| Vision-and-Language Navigation | R2R (val seen) | Success Rate (SR): 61 | 51 |
| Vision-and-Language Navigation | R2R (test) | SPL (Success weighted by Path Length): 45.5 | 38 |
