
On the State of the Art of Evaluation in Neural Language Models

About

Ongoing innovations in recurrent neural network architectures have provided a steady influx of apparently state-of-the-art results on language modelling benchmarks. However, these have been evaluated using differing code bases and limited computational resources, which represent uncontrolled sources of experimental variation. We reevaluate several popular architectures and regularisation methods with large-scale automatic black-box hyperparameter tuning and arrive at the somewhat surprising conclusion that standard LSTM architectures, when properly regularised, outperform more recent models. We establish a new state of the art on the Penn Treebank and Wikitext-2 corpora, as well as strong baselines on the Hutter Prize dataset.
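The abstract's central method is large-scale automatic black-box hyperparameter tuning: the tuner only proposes hyperparameter settings and observes a validation score, without seeing model internals. The paper used a Gaussian-process-based tuner; the sketch below uses plain random search as a stand-in to illustrate the same propose–train–score loop. The search space, `objective`, and trial budget here are illustrative assumptions, not the paper's actual configuration.

```python
import random

# Hypothetical search space for LSTM language-model hyperparameters.
# Each entry maps a name to a sampler for one trial's value.
SPACE = {
    "learning_rate": lambda: 10 ** random.uniform(-4, -1),  # log-uniform
    "dropout": lambda: random.uniform(0.0, 0.7),
    "hidden_size": lambda: random.choice([256, 512, 1024]),
}

def random_search(objective, n_trials=20, seed=0):
    """Black-box minimisation: try n_trials random settings and keep
    the one with the lowest objective (e.g. validation perplexity)."""
    random.seed(seed)
    best, best_score = None, float("inf")
    for _ in range(n_trials):
        params = {name: sample() for name, sample in SPACE.items()}
        score = objective(params)  # train a model, return validation loss
        if score < best_score:
            best, best_score = params, score
    return best, best_score
```

A real run would replace `objective` with a function that trains the regularised LSTM under `params` and returns its validation perplexity; the tuner never needs anything else from the model.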

Gábor Melis, Chris Dyer, Phil Blunsom • 2017

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Language Modeling | WikiText-2 (test) | PPL | 65.9 | 1541 |
| Language Modeling | PTB (test) | Perplexity | 58.3 | 471 |
| Language Modeling | Penn Treebank (test) | Perplexity | 58.3 | 411 |
| Language Modeling | WikiText2 v1 (test) | Perplexity | 65.9 | 341 |
| Language Modeling | WikiText2 (val) | Perplexity (PPL) | 69.1 | 277 |
| Character-level Language Modeling | enwik8 (test) | BPC | 1.626 | 195 |
| Language Modeling | Penn Treebank (val) | Perplexity | 60.9 | 178 |
| Language Modeling | PTB (val) | Perplexity | 60.9 | 83 |
| Language Modeling | Penn Treebank word-level (test) | Perplexity | 58.3 | 72 |
| Character-level Language Modeling | Hutter Prize Wikipedia (test) | Bits/Char | 1.3 | 28 |
Showing 10 of 15 rows
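The table mixes two metrics: word-level results report perplexity and character-level results report bits per character (BPC). Both are simple transforms of the model's average negative log-likelihood, as this small sketch shows (function names are illustrative):

```python
import math

def perplexity(nll_nats_per_token: float) -> float:
    """Word-level perplexity: exp of the average negative
    log-likelihood per token, measured in nats."""
    return math.exp(nll_nats_per_token)

def bits_per_char(nll_nats_per_char: float) -> float:
    """Character-level BPC: the same average NLL, converted
    from nats to bits (divide by ln 2)."""
    return nll_nats_per_char / math.log(2)
```

So a test perplexity of 58.3 corresponds to an average loss of ln 58.3 ≈ 4.07 nats per word, and the 1.3 bits/char on the Hutter Prize data corresponds to 1.3 · ln 2 ≈ 0.90 nats per character.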
