Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context
About
Transformers have the potential to learn longer-term dependency, but are limited by a fixed-length context in the setting of language modeling. We propose a novel neural architecture, Transformer-XL, that enables learning dependency beyond a fixed length without disrupting temporal coherence. It consists of a segment-level recurrence mechanism and a novel positional encoding scheme. Our method not only enables capturing longer-term dependency, but also resolves the context fragmentation problem. As a result, Transformer-XL learns dependency that is 80% longer than RNNs and 450% longer than vanilla Transformers, achieves better performance on both short and long sequences, and is up to 1,800+ times faster than vanilla Transformers during evaluation. Notably, we improve the state-of-the-art results of bpc/perplexity to 0.99 on enwik8, 1.08 on text8, 18.3 on WikiText-103, 21.8 on One Billion Word, and 54.5 on Penn Treebank (without finetuning). When trained only on WikiText-103, Transformer-XL manages to generate reasonably coherent, novel text articles with thousands of tokens. Our code, pretrained models, and hyperparameters are available in both TensorFlow and PyTorch.
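As a rough illustration of the segment-level recurrence idea described above, the sketch below caches the previous segment's hidden states and lets the current segment attend over them as read-only memory. This is a minimal PyTorch sketch, not the released implementation: the class name `SegmentRecurrentAttention` and the parameters `d_model`, `n_head`, and `mem_len` are illustrative assumptions, and the paper's relative positional encoding and causal masking are omitted for brevity.

```python
# Minimal sketch of segment-level recurrence (illustrative, not the paper's code).
import torch
import torch.nn as nn

class SegmentRecurrentAttention(nn.Module):
    def __init__(self, d_model=64, n_head=4, mem_len=32):
        super().__init__()
        self.mem_len = mem_len
        self.attn = nn.MultiheadAttention(d_model, n_head, batch_first=True)

    def forward(self, x, mem=None):
        # x:   (batch, seg_len, d_model) current segment
        # mem: (batch, mem_len, d_model) cached states from earlier segments
        if mem is None:
            mem = x.new_zeros(x.size(0), 0, x.size(-1))
        # Keys/values span the cached memory plus the current segment;
        # no gradient flows into the cache (detach), as in the paper's recurrence.
        context = torch.cat([mem.detach(), x], dim=1)
        out, _ = self.attn(query=x, key=context, value=context, need_weights=False)
        # Keep the most recent `mem_len` states as memory for the next segment.
        new_mem = context[:, -self.mem_len:].detach()
        return out, new_mem

# Usage: process a long sequence segment by segment, carrying the memory along.
layer = SegmentRecurrentAttention()
mem = None
for segment in torch.randn(4, 3, 16, 64).unbind(dim=1):  # 3 segments of length 16
    out, mem = layer(segment, mem)
```

Carrying `mem` across segments is what lets the effective context grow beyond a single segment's length, which is the mechanism the abstract credits for resolving context fragmentation.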
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Language Modeling | WikiText-103 (test) | Perplexity | 18.1 | 524 |
| Language Modeling | PTB (test) | Perplexity | 54.5 | 471 |
| Language Modeling | Penn Treebank (test) | Perplexity | 55.41 | 411 |
| Language Modeling | WikiText2 v1 (test) | Perplexity | 64.85 | 341 |
| Character-level Language Modeling | enwik8 (test) | BPC | 0.98 | 195 |
| Language Modeling | WikiText-103 (val) | Perplexity | 17.3 | 180 |
| Language Modeling | Penn Treebank (val) | Perplexity | 57.93 | 178 |
| Language Modeling | WikiText-103 | Perplexity | 18.4 | 146 |
| Language Modeling | arXiv (test) | Perplexity | 8.21 | 137 |
| Character-level Language Modeling | text8 (test) | BPC | 1.08 | 128 |