
Large Memory Layers with Product Keys

About

This paper introduces a structured memory which can be easily integrated into a neural network. The memory is very large by design and significantly increases the capacity of the architecture, by up to a billion parameters with a negligible computational overhead. Its design and access pattern are based on product keys, which enable fast and exact nearest neighbor search. The ability to increase the number of parameters while keeping the same computational budget lets the overall system strike a better trade-off between prediction accuracy and computation efficiency, both at training and test time. This memory layer allows us to tackle very large-scale language modeling tasks. In our experiments we consider a dataset with up to 30 billion words, and we plug our memory layer into a state-of-the-art transformer-based architecture. In particular, we found that a memory-augmented model with only 12 layers outperforms a baseline transformer model with 24 layers, while being twice as fast at inference time. We release our code for reproducibility purposes.
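The product-key mechanism the abstract describes can be sketched as follows. Each full key is the concatenation of one sub-key from each of two small codebooks, so a query's dot-product score decomposes into two half-scores; taking the top-k per half and combining them recovers the exact top-k over the full Cartesian product. This is a minimal NumPy illustration of that idea, with assumed function and variable names (not taken from the released code):

```python
import numpy as np

def product_key_topk(query, subkeys1, subkeys2, k):
    """Exact top-k over the |C1| x |C2| product key set.

    Scores only |C1| + |C2| sub-keys instead of |C1| * |C2| full keys:
    the score of key (i, j) is s1[i] + s2[j], and any pair in the
    overall top-k must use a top-k sub-key on each side.
    """
    half = query.shape[0] // 2
    s1 = subkeys1 @ query[:half]               # (n1,) half-scores
    s2 = subkeys2 @ query[half:]               # (n2,) half-scores
    i1 = np.argsort(-s1)[:k]                   # top-k sub-keys, side 1
    i2 = np.argsort(-s2)[:k]                   # top-k sub-keys, side 2
    grid = s1[i1][:, None] + s2[i2][None, :]   # (k, k) candidate scores
    flat = np.argsort(-grid, axis=None)[:k]    # best k of the k*k pairs
    a, b = np.unravel_index(flat, grid.shape)
    # Flat index of key (i, j) in the full product key set.
    return i1[a] * subkeys2.shape[0] + i2[b], grid[a, b]
```

In the paper's memory layer, the indices returned by such a lookup select memory slots whose values are combined with softmax weights; the sketch above covers only the exact nearest-neighbor search that makes the billion-parameter memory cheap to address.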

Guillaume Lample, Alexandre Sablayrolles, Marc'Aurelio Ranzato, Ludovic Denoyer, Hervé Jégou • 2019

Related benchmarks

Task                     Dataset      Metric     Result   Rank
Commonsense Reasoning    HellaSwag    Accuracy   52.7     1460
Commonsense Reasoning    WinoGrande   Accuracy   56.7     776
Language Understanding   MMLU         Accuracy   36.3     756
Commonsense Reasoning    PIQA         Accuracy   73.8     647
Question Answering       OBQA         Accuracy   38.2     276
Question Answering       ARC          Accuracy   53.6     154
Question Answering       TriviaQA     Accuracy   12.2     85
