Learning to Reason with Third-Order Tensor Products
About
We combine Recurrent Neural Networks with Tensor Product Representations to learn combinatorial representations of sequential data. This improves symbolic interpretation and systematic generalisation. Our architecture is trained end-to-end through gradient descent on a variety of simple natural language reasoning tasks, significantly outperforming the latest state-of-the-art models in single-task and all-tasks settings. We also augment a subset of the data such that training and test data exhibit large systematic differences and show that our approach generalises better than the previous state-of-the-art.
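The core idea of a third-order Tensor Product Representation is to bind triples (e.g. entity, relation, value) into one memory tensor via an outer product, and to read values back by contracting the tensor with the keys. Below is a minimal, hedged sketch of this binding/unbinding mechanism in NumPy; the variable names, the embedding size, and the assumption of normalised key vectors are illustrative and not taken from the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16  # illustrative embedding size (not the paper's setting)

# Hypothetical embeddings for one (entity, relation, value) triple.
entity = rng.standard_normal(d)
relation = rng.standard_normal(d)
value = rng.standard_normal(d)

# Normalise the key vectors so unbinding with the same keys
# recovers the stored value exactly.
entity /= np.linalg.norm(entity)
relation /= np.linalg.norm(relation)

# Write: bind the triple into a third-order tensor memory,
# memory[i, j, k] = entity[i] * relation[j] * value[k].
memory = np.einsum('i,j,k->ijk', entity, relation, value)

# Read: contract the memory with the entity and relation keys,
# sum_ij memory[i, j, k] * entity[i] * relation[j] -> value[k].
recovered = np.einsum('ijk,i,j->k', memory, entity, relation)

assert np.allclose(recovered, value)
```

In a trained model, multiple triples would be superimposed by summing their outer products into the same memory tensor, and retrieval becomes approximate when the key vectors are not orthogonal; the sketch above shows only the noise-free single-triple case.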
Imanol Schlag, Jürgen Schmidhuber • 2018
Related benchmarks
| Task | Dataset | Metric | Value | Rank |
|---|---|---|---|---|
| Question Answering | bAbI (test) | Mean Error | 1.34 | 54 |
| sys-bAbI task | sys-bAbI original (test) | Gap | 7.95 | 22 |
| Multi-hop spatial reasoning | StepGame with distracting noise (test) | k=1 Accuracy | 70.29 | 6 |
| Multi-hop spatial reasoning | StepGame larger k generalization (test) | Accuracy (k=6) | 22.25 | 6 |
| Spatial Reasoning | bAbI original (test) | Task 17 Accuracy | 97.55 | 6 |
| Nth-farthest | Nth-farthest (test) | Accuracy | 13 | 6 |