
Relational recurrent neural networks

About

Memory-based neural networks model temporal data by leveraging an ability to remember information for long periods. It is unclear, however, whether they also have an ability to perform complex relational reasoning with the information they remember. Here, we first confirm our intuitions that standard memory architectures may struggle at tasks that heavily involve an understanding of the ways in which entities are connected -- i.e., tasks involving relational reasoning. We then improve upon these deficits by using a new memory module -- a Relational Memory Core (RMC) -- which employs multi-head dot product attention to allow memories to interact. Finally, we test the RMC on a suite of tasks that may profit from more capable relational reasoning across sequential information, and show large gains in RL domains (e.g. Mini PacMan), program evaluation, and language modeling, achieving state-of-the-art results on the WikiText-103, Project Gutenberg, and GigaWord datasets.
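The core mechanism the abstract names -- multi-head dot product attention that lets memory slots interact -- can be sketched in a few lines. This is not the paper's full RMC update (which also includes gating and a recurrent state update); it is a minimal NumPy illustration in which the projection matrices are random placeholders standing in for learned parameters.

```python
import numpy as np

def multi_head_attention(memory, num_heads=2, head_dim=4, seed=0):
    """Sketch of multi-head dot-product attention over memory slots.

    memory: (num_slots, d_model) array; each row is one memory slot.
    The weight matrices below are random stand-ins for learned weights.
    """
    rng = np.random.default_rng(seed)
    num_slots, d_model = memory.shape
    heads = []
    for _ in range(num_heads):
        # Per-head query/key/value projections (placeholders, not trained).
        Wq = rng.standard_normal((d_model, head_dim))
        Wk = rng.standard_normal((d_model, head_dim))
        Wv = rng.standard_normal((d_model, head_dim))
        q, k, v = memory @ Wq, memory @ Wk, memory @ Wv
        # Scaled dot-product: every slot attends to every slot,
        # which is what allows memories to interact.
        scores = q @ k.T / np.sqrt(head_dim)
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)
        heads.append(weights @ v)  # (num_slots, head_dim)
    # Concatenate heads into the updated memory matrix.
    return np.concatenate(heads, axis=-1)

memory = np.random.default_rng(1).standard_normal((4, 8))  # 4 slots
updated = multi_head_attention(memory)
print(updated.shape)  # (4, 8), since num_heads * head_dim = d_model
```

In the RMC this attention step is applied at every timestep, with the current input appended to the memory before attending, so new information is integrated relationally rather than written to a single slot.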

Adam Santoro, Ryan Faulkner, David Raposo, Jack Rae, Mike Chrzanowski, Theophane Weber, Daan Wierstra, Oriol Vinyals, Razvan Pascanu, Timothy Lillicrap • 2018

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Language Modeling | WikiText-103 (test) | Perplexity: 31.6 | 524 |
| Language Modeling | WikiText-103 (val) | Perplexity: 30.8 | 180 |
| Word-level Language Modeling | WikiText-103 word-level (test) | Perplexity: 31.6 | 65 |
| Sequential MNIST resolution generalization | Sequential MNIST Resolution Generalization (test) | Accuracy (16x16): 89.58 | 9 |
| Copying Task | Copying Task 200 (test) | Cross-Entropy: 0.13 | 9 |
| Copying Task | Copying Task 50 (train) | Cross-Entropy: 0.00 | 9 |
| Nth-farthest | Nth-farthest (test) | Accuracy: 91 | 6 |
| Addition | Add (test) | Per-Character Accuracy: 99.9 | 4 |
| Control Flow Evaluation | Control (test) | Per-Character Accuracy: 99.6 | 4 |
| Program Evaluation | Program (test) | Per-Character Accuracy: 79 | 4 |

Showing 10 of 16 rows.
