BabyAI: A Platform to Study the Sample Efficiency of Grounded Language Learning
About
Allowing humans to interactively train artificial agents to understand language instructions is desirable for both practical and scientific reasons, but given the poor data efficiency of the current learning methods, this goal may require substantial research efforts. Here, we introduce the BabyAI research platform to support investigations towards including humans in the loop for grounded language learning. The BabyAI platform comprises an extensible suite of 19 levels of increasing difficulty. The levels gradually lead the agent towards acquiring a combinatorially rich synthetic language which is a proper subset of English. The platform also provides a heuristic expert agent for the purpose of simulating a human teacher. We report baseline results and estimate the amount of human involvement that would be required to train a neural network-based agent on some of the BabyAI levels. We put forward strong evidence that current deep learning methods are not yet sufficiently sample efficient when it comes to learning a language with compositional properties.
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Instruction Following | BabyAI BossLevel | Success Rate | 20.6 | 14 |
| Imitation Learning | BabyAI BossLevel (test) | Success Rate | 45.3 | 9 |
| Imitation Learning | BabyAI GoToSeq (test) | Success Rate | 47.1 | 9 |
| Imitation Learning | BabyAI SynthSeq (test) | Success Rate | 0.404 | 9 |
| SynthSeq | BabyAI | Average Pass Rate | 20 | 7 |
| Four Rooms | MiniGrid | Average Pass Rate | 65 | 7 |
| GoTo | BabyAI | Average Pass Rate | 0.302 | 7 |
| Pickup | BabyAI | Average Pass Rate | 15.2 | 7 |
| BossLevel | BabyAI | Average Pass Rate | 0.238 | 7 |
| Instruction Following | BabyAI GoTo | Average Episodic Reward | 0.246 | 7 |