
Improving Neural Language Models with a Continuous Cache

About

We propose an extension to neural network language models that adapts their predictions to the recent history. Our model is a simplified version of memory-augmented networks: it stores past hidden activations as memory and accesses them through a dot product with the current hidden activation. This mechanism is very efficient and scales to very large memory sizes. We also draw a link between the use of external memory in neural networks and the cache models used with count-based language models. We demonstrate on several language modeling datasets that our approach performs significantly better than recent memory-augmented networks.

Edouard Grave, Armand Joulin, Nicolas Usunier · 2016
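
The mechanism described in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the memory holds pairs of past hidden states and the words that followed them, the cache distribution weights each stored word by the exponentiated dot product of its hidden state with the current one, and the result is linearly interpolated with the base model's softmax. All dimensions and the hyperparameter values (`theta`, `lam`) are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes for illustration only.
vocab_size, hidden_dim, cache_size = 10, 4, 6

# The cache: past hidden activations h_i and the word id x_i
# observed at each of those time steps.
cache_h = rng.normal(size=(cache_size, hidden_dim))
cache_w = rng.integers(0, vocab_size, size=cache_size)

# Current hidden state and the base language model's distribution
# (stand-ins for the RNN output and its softmax).
h_t = rng.normal(size=hidden_dim)
logits = rng.normal(size=vocab_size)
p_vocab = np.exp(logits) / np.exp(logits).sum()

theta, lam = 0.5, 0.1  # cache flatness and interpolation weight (assumed)

# Cache distribution: score each memory slot by a dot product with
# the current hidden state, then assign that mass to the stored word.
scores = np.exp(theta * cache_h @ h_t)   # shape (cache_size,)
p_cache = np.zeros(vocab_size)
np.add.at(p_cache, cache_w, scores)      # accumulate repeated words
p_cache /= p_cache.sum()

# Final prediction: linear interpolation of the two distributions.
p = (1 - lam) * p_vocab + lam * p_cache
```

Because the memory access is a single matrix-vector product followed by a normalization, the cost per step grows only linearly in the cache size, which is what lets the memory scale to thousands of past states.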

Related benchmarks

Task                              Dataset                          Result              Rank
Language Modeling                 WikiText-2 (test)                PPL 68.9            1949
Language Modeling                 WikiText-103 (test)              Perplexity 18.27    579
Language Modeling                 Penn Treebank (test)             Perplexity 72.1     411
Language Modeling                 WikiText2 (val)                  Perplexity 72.1     387
Language Modeling                 WikiText2 v1 (test)              Perplexity 68.9     383
Language Modeling                 Penn Treebank (val)              Perplexity 74.6     178
Language Modeling                 LAMBADA                          Perplexity 138      150
Character-level Language Modeling text8 (test)                     BPC 1.43            128
Language Modeling                 One Billion Word Benchmark (test) Perplexity 30.6    113
Language Modeling                 Penn Treebank word-level (test)  Perplexity 72.1     72

(showing 10 of 17 rows)
