
Dynamic Evaluation of Transformer Language Models

About

This research note combines two methods that have recently improved the state of the art in language modeling: Transformers and dynamic evaluation. Transformers use stacked layers of self-attention that allow them to capture long range dependencies in sequential data. Dynamic evaluation fits models to the recent sequence history, allowing them to assign higher probabilities to re-occurring sequential patterns. By applying dynamic evaluation to Transformer-XL models, we improve the state of the art on enwik8 from 0.99 to 0.94 bits/char, text8 from 1.08 to 1.04 bits/char, and WikiText-103 from 18.3 to 16.4 perplexity points.
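To make the idea of dynamic evaluation concrete, here is a minimal sketch on a toy model (hypothetical, pure Python; nothing like the paper's Transformer-XL setup): a two-symbol bigram softmax model whose logits are updated by one gradient step on each character's loss *as the sequence is evaluated*, so re-occurring patterns in the recent history become more probable.

```python
# Toy sketch of dynamic evaluation: a bigram softmax model adapted online.
# All names and hyperparameters here are illustrative assumptions.
import math

VOCAB = "ab"
V = len(VOCAB)

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def evaluate(text, lr=0.0):
    """Return average bits/char; lr > 0 enables dynamic evaluation."""
    # logits[prev][nxt]: start from a uniform (all-zero) bigram model.
    logits = [[0.0] * V for _ in range(V)]
    total_bits, prev = 0.0, 0
    for ch in text:
        nxt = VOCAB.index(ch)
        probs = softmax(logits[prev])
        total_bits += -math.log2(probs[nxt])
        if lr > 0:
            # Dynamic evaluation: one SGD step on this character's
            # cross-entropy loss (gradient = probs - one_hot(nxt)).
            for k in range(V):
                grad = probs[k] - (1.0 if k == nxt else 0.0)
                logits[prev][k] -= lr * grad
        prev = nxt
    return total_bits / len(text)

seq = "abab" * 50  # highly repetitive sequence
print(f"static:  {evaluate(seq, lr=0.0):.3f} bits/char")
print(f"dynamic: {evaluate(seq, lr=0.5):.3f} bits/char")
```

On this repetitive sequence the static model stays at 1 bit/char (uniform over two symbols), while the dynamically evaluated model adapts to the alternating pattern and drives its bits/char well below that, which is the effect the paper exploits at much larger scale.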

Ben Krause, Emmanuel Kahembwe, Iain Murray, Steve Renals · 2019

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Character-level Language Modeling | enwik8 (test) | BPC | 0.94 | 195 |
| Language Modeling | WikiText-103 (val) | PPL | 15.8 | 180 |
| Character-level Language Modeling | text8 (test) | BPC | 1.038 | 128 |
| Word-level Language Modeling | WikiText-103 word-level (test) | Perplexity | 16.4 | 65 |
