
Character-Level Language Modeling with Deeper Self-Attention

About

LSTMs and other RNN variants have shown strong performance on character-level language modeling. These models are typically trained using truncated backpropagation through time, and it is common to assume that their success stems from their ability to remember long-term contexts. In this paper, we show that a deep (64-layer) transformer model with fixed context outperforms RNN variants by a large margin, achieving state of the art on two popular benchmarks: 1.13 bits per character on text8 and 1.06 on enwik8. To get good results at this depth, we show that it is important to add auxiliary losses, both at intermediate network layers and intermediate sequence positions.
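The auxiliary losses at intermediate layers can be sketched as a weighted sum of per-layer prediction losses, where lower layers stop contributing partway through training. The sketch below is a simplified, hypothetical illustration in NumPy (the function names and the exact schedule are assumptions, not the paper's code): the loss from layer l of L is dropped once training passes the fraction l / (2L) of its total steps, while the final layer's loss is always kept.

```python
import numpy as np

def softmax_xent(logits, target):
    # Cross-entropy of one position's logits against an integer target.
    z = logits - logits.max()
    logp = z - np.log(np.exp(z).sum())
    return -logp[target]

def total_loss(per_layer_logits, targets, step, total_steps):
    """Combine the final-layer loss with auxiliary intermediate-layer losses.

    per_layer_logits: list of L items, one per layer; each item is a list of
      logit vectors, one per sequence position.
    Simplified schedule (an assumption for illustration): layer l's loss is
    active only while step/total_steps < l / (2L); the top layer always is.
    """
    L = len(per_layer_logits)
    loss = 0.0
    for l, layer_logits in enumerate(per_layer_logits, start=1):
        active = (l == L) or (step / total_steps < l / (2 * L))
        if active:
            loss += sum(softmax_xent(lg, t)
                        for lg, t in zip(layer_logits, targets))
    return loss
```

Early in training every layer predicts the targets, so gradients reach the lower layers directly; late in training only the top layer's loss remains, matching standard language-model training.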

Rami Al-Rfou, Dokook Choe, Noah Constant, Mandy Guo, Llion Jones · 2018

Related benchmarks

Task                               | Dataset                            | Result                | Rank
Language Modeling                  | WikiText-103 (test)                | Perplexity 18.66      | 524
Character-level Language Modeling  | enwik8 (test)                      | BPC 1.06              | 195
Character-level Language Modeling  | text8 (test)                       | BPC 1.13              | 128
Language Modeling                  | One Billion Word Benchmark (test)  | Test Perplexity 40.6  | 108
Character-level Language Modeling  | text8                              | BPC 1.13              | 16
Character-level Language Modeling  | text8 (dev)                        | BPC 1.06              | 13
Density modeling                   | enwik8 (test)                      | Bits per Byte 1.06    | 4

Other info

Code
