
ALFRED: A Benchmark for Interpreting Grounded Instructions for Everyday Tasks

About

We present ALFRED (Action Learning From Realistic Environments and Directives), a benchmark for learning a mapping from natural language instructions and egocentric vision to sequences of actions for household tasks. ALFRED includes long, compositional tasks with non-reversible state changes to shrink the gap between research benchmarks and real-world applications. ALFRED consists of expert demonstrations in interactive visual environments for 25k natural language directives. These directives contain both high-level goals like "Rinse off a mug and place it in the coffee maker." and low-level language instructions like "Walk to the coffee maker on the right." ALFRED tasks are more complex in terms of sequence length, action space, and language than existing vision-and-language task datasets. We show that a baseline model based on recent embodied vision-and-language tasks performs poorly on ALFRED, suggesting that there is significant room for developing innovative grounded visual language understanding models with this benchmark.
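To make the structure of a directive concrete, the sketch below shows how one annotated ALFRED-style task could be represented: a high-level goal, the low-level step-by-step instructions, and the expert action sequence an agent is asked to reproduce. The field names and most example values are illustrative assumptions, not the dataset's actual schema; only the two quoted directives above come from the paper.

```python
# Illustrative sketch of one ALFRED-style annotated task.
# Field names and action names are hypothetical, not the dataset's real JSON schema.
example_task = {
    # High-level goal directive (quoted from the paper's abstract)
    "goal": "Rinse off a mug and place it in the coffee maker.",
    # Low-level step-by-step instructions (first one from the abstract,
    # the rest invented for illustration)
    "instructions": [
        "Walk to the coffee maker on the right.",
        "Pick up the mug next to the sink.",
        "Rinse the mug under the faucet.",
        "Put the clean mug in the coffee maker.",
    ],
    # Expert demonstration: the action sequence the agent should reproduce
    "actions": [
        {"action": "MoveAhead"},
        {"action": "PickupObject", "object": "Mug"},
        {"action": "ToggleObjectOn", "object": "Faucet"},
        {"action": "PutObject", "object": "Mug", "receptacle": "CoffeeMachine"},
    ],
}

# A model maps (goal, instructions, egocentric frames) -> action sequence;
# success requires reaching the correct final environment state.
for step in example_task["actions"]:
    print(step["action"])
```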

Mohit Shridhar, Jesse Thomason, Daniel Gordon, Yonatan Bisk, Winson Han, Roozbeh Mottaghi, Luke Zettlemoyer, Dieter Fox • 2019

Related benchmarks

Task                            | Dataset                 | Metric              | Result | Rank
Continual Instruction Following | ALFRED                  | Success Rate (SR)   | 0.1    | 28
Instruction Following           | ALFRED (test-unseen)    | GC                  | 7.03   | 23
Embodied Instruction Following  | ALFRED seen 1.0 (test)  | GC                  | 9.42   | 20
Embodied Task Completion        | ALFRED unseen (test)    | Success Rate        | 39     | 14
Embodied Task Completion        | ALFRED seen (test)      | Success Rate (SR)   | 3.98   | 14
Subtask Completion              | ALFRED                  | Avg Completion Rate | 0.39   | 4
