Value Iteration Networks
About
We introduce the value iteration network (VIN): a fully differentiable neural network with a 'planning module' embedded within. VINs can learn to plan, and are suitable for predicting outcomes that involve planning-based reasoning, such as policies for reinforcement learning. Key to our approach is a novel differentiable approximation of the value-iteration algorithm, which can be represented as a convolutional neural network and trained end-to-end using standard backpropagation. We evaluate VIN-based policies on discrete and continuous path-planning domains, and on a natural-language search task. We show that by learning an explicit planning computation, VIN policies generalize better to new, unseen domains.
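The core idea is that each sweep of value iteration on a 2D grid can be written as a convolution (one kernel per action producing a Q-value channel) followed by a max over the action channels. The sketch below illustrates this correspondence with hand-set kernels rather than the authors' learned, end-to-end-trained weights; the function name, kernel shapes, and iteration count are illustrative assumptions, not the paper's API.

```python
import numpy as np
from scipy.signal import correlate2d

def vi_module(reward, w_r, w_v, iters):
    """Sketch of a VIN-style planning module: `iters` steps of value
    iteration expressed as convolution plus a channel-wise max.

    reward : (H, W) reward map
    w_r    : list of 3x3 kernels, one per action, applied to the reward map
    w_v    : list of 3x3 kernels, one per action, applied to the value map
    (In a real VIN these kernels are learned by backpropagation.)
    """
    value = np.zeros_like(reward)
    for _ in range(iters):
        # One Q-value channel per action: convolve reward and value maps.
        q = np.stack([
            correlate2d(reward, wr, mode='same') +
            correlate2d(value, wv, mode='same')
            for wr, wv in zip(w_r, w_v)
        ])
        # Max over the action channels, as in classical value iteration.
        value = q.max(axis=0)
    return value
```

With kernels chosen to mimic four deterministic moves and a discount of 0.9 (each `w_v` kernel places 0.9 at a neighboring cell, each `w_r` kernel picks out the current cell), the computed value map decays geometrically with distance from a reward source, matching tabular value iteration on the same grid MDP.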
Aviv Tamar, Yi Wu, Garrett Thomas, Sergey Levine, Pieter Abbeel · 2016
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Navigation | MiniWorld MazeS3 | Success Rate | 90.3 | 14 |
| Navigation | MiniWorld 8x8 mazes (unseen) | Success Rate | 41.2 | 5 |
| Semantic Navigation | Active Vision Dataset (AVD) (train) | Success Rate | 61.6 | 3 |
| Semantic Navigation | Active Vision Dataset (AVD) (val) | Success Rate | 45.0 | 3 |