Dynamic Evaluation of Transformer Language Models
About
This research note combines two methods that have recently improved the state of the art in language modeling: Transformers and dynamic evaluation. Transformers use stacked layers of self-attention that allow them to capture long-range dependencies in sequential data. Dynamic evaluation fits models to the recent sequence history, allowing them to assign higher probabilities to recurring sequential patterns. By applying dynamic evaluation to Transformer-XL models, we improve the state of the art on enwik8 from 0.99 to 0.94 bits/char, on text8 from 1.08 to 1.04 bits/char, and on WikiText-103 from 18.3 to 16.4 perplexity.
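To make the idea concrete, here is a minimal toy sketch of dynamic evaluation, not the paper's Transformer-XL setup: a softmax bigram character model is "trained" from counts, then evaluated on a test string either statically or with one SGD step on each observed character's loss. Because the test string reuses a pattern never seen in training, the dynamically updated model assigns it higher probability on later occurrences. All names and the learning rate are illustrative choices.

```python
import numpy as np

def evaluate(logits, text, vocab, lr=0.0):
    """Return average negative log2-probability (bits/char).

    lr > 0 enables dynamic evaluation: after scoring each character,
    take one gradient step on that character's cross-entropy loss.
    """
    W = logits.copy()  # adapt a copy so the static model is untouched
    total = 0.0
    for prev, nxt in zip(text, text[1:]):
        i, j = vocab[prev], vocab[nxt]
        p = np.exp(W[i] - W[i].max())  # stable softmax over next chars
        p /= p.sum()
        total += -np.log2(p[j])
        if lr > 0:
            # gradient of cross-entropy w.r.t. the logits row
            grad = p.copy()
            grad[j] -= 1.0
            W[i] -= lr * grad
    return total / (len(text) - 1)

train = "abcabcabc"
test = "abzabzabz"  # 'abz' recurs in the test stream but was never trained on
vocab = {c: k for k, c in enumerate(sorted(set(train + test)))}
V = len(vocab)

# Static model: smoothed log bigram counts from the training text.
counts = np.ones((V, V))
for prev, nxt in zip(train, train[1:]):
    counts[vocab[prev], vocab[nxt]] += 1
logits = np.log(counts)

static_bpc = evaluate(logits, test, vocab, lr=0.0)
dynamic_bpc = evaluate(logits, test, vocab, lr=1.0)
print(f"static:  {static_bpc:.3f} bits/char")
print(f"dynamic: {dynamic_bpc:.3f} bits/char")
```

The dynamic pass pays full price for the first `bz` transition, but the gradient step raises that transition's logit, so later repetitions are cheaper; this is the same mechanism the paper applies to Transformer-XL parameters at evaluation time.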
Ben Krause, Emmanuel Kahembwe, Iain Murray, Steve Renals • 2019
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Character-level Language Modeling | enwik8 (test) | BPC | 0.94 | 195 |
| Language Modeling | WikiText-103 (val) | PPL | 15.8 | 180 |
| Character-level Language Modeling | text8 (test) | BPC | 1.038 | 128 |
| Word-level Language Modeling | WikiText-103 word-level (test) | Perplexity | 16.4 | 65 |