
Mapping Instructions to Actions in 3D Environments with Visual Goal Prediction

About

We propose to decompose instruction execution into goal prediction and action generation. We design a model that maps raw visual observations to goals using LINGUNET, a language-conditioned image generation network, and then generates the actions required to complete them. Our model is trained from demonstrations only, without external resources. To evaluate our approach, we introduce two benchmarks for instruction following: LANI, a navigation task; and CHAI, where an agent executes household instructions. Our evaluation demonstrates the advantages of our model decomposition and illustrates the challenges posed by our new benchmarks.
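To make the two-stage decomposition concrete, here is a minimal PyTorch sketch. It follows LINGUNET's core idea from the paper: the instruction embedding is split into per-level slices, each slice generates a 1x1 convolution kernel that filters the matching encoder feature map, and a deconvolution stack decodes the filtered maps into a goal-location map. A small convolutional policy head then stands in for the action generator. All layer sizes, the embedding dimension, and the policy head are illustrative assumptions, not the authors' exact configuration.

```python
# Sketch of the goal-prediction / action-generation decomposition.
# All sizes and the policy head are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LingUNetSketch(nn.Module):
    """LINGUNET-style network: the instruction embedding is split into
    slices, each slice generates a 1x1 conv kernel that filters one
    encoder level, and a deconvolution stack decodes the filtered maps
    into goal-location logits."""
    def __init__(self, in_ch=3, hidden=32, text_dim=96, levels=3):
        super().__init__()
        self.levels = levels
        self.enc = nn.ModuleList(
            nn.Conv2d(in_ch if i == 0 else hidden, hidden, 3,
                      stride=2, padding=1)
            for i in range(levels))
        # One linear map per level: instruction slice -> flattened 1x1 kernel.
        self.kernel_gen = nn.ModuleList(
            nn.Linear(text_dim // levels, hidden * hidden)
            for _ in range(levels))
        self.dec = nn.ModuleList(
            nn.ConvTranspose2d(hidden if i == 0 else 2 * hidden, hidden,
                               4, stride=2, padding=1)
            for i in range(levels))
        self.out = nn.Conv2d(hidden, 1, 1)  # single-channel goal logits

    def forward(self, image, text):
        feats, x = [], image
        for conv in self.enc:                # encode, keeping every level
            x = F.relu(conv(x))
            feats.append(x)
        filtered = []
        for f, slice_, gen in zip(feats, text.chunk(self.levels, -1),
                                  self.kernel_gen):
            b, c, h, w = f.shape
            k = gen(slice_).view(b * c, c, 1, 1)   # per-example 1x1 kernels
            f = F.conv2d(f.reshape(1, b * c, h, w), k, groups=b)
            filtered.append(F.relu(f.view(b, c, h, w)))
        x = filtered[-1]
        for i, deconv in enumerate(self.dec):      # decode top-down with skips
            inp = x if i == 0 else torch.cat(
                [x, filtered[self.levels - 1 - i]], dim=1)
            x = F.relu(deconv(inp))
        return self.out(x)                   # goal logits over the image plane

class ActionGenerator(nn.Module):
    """Stand-in action generator: observation plus predicted goal map ->
    logits over a discrete action set (the paper's generator is recurrent)."""
    def __init__(self, in_ch=4, n_actions=5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 16, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, n_actions))

    def forward(self, image, goal_logits):
        goal = torch.sigmoid(goal_logits)    # goal probability map
        return self.net(torch.cat([image, goal], dim=1))

# Usage: predict the goal from the instruction, then act toward it.
image = torch.randn(2, 3, 64, 64)      # raw visual observation
text = torch.randn(2, 96)              # instruction embedding (assumed given)
goal = LingUNetSketch()(image, text)   # (2, 1, 64, 64) goal logits
actions = ActionGenerator()(image, goal)    # (2, 5) action logits
```

Decomposing this way lets the goal predictor be supervised directly from demonstrated goal locations while the action generator only has to reach a visible goal, which is the advantage the evaluation highlights.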

Dipendra Misra, Andrew Bennett, Valts Blukis, Eyvind Niklasson, Max Shatkhin, Yoav Artzi • 2018

Related benchmarks

Task                       | Dataset                  | Result                       | Rank
Vision-Language Navigation | AerialVLN S (val seen)   | Navigation Error (NE): 383.8 | 13
Vision-Language Navigation | AerialVLN S (val unseen) | Navigation Error (NE): 368.4 | 13
