Generating Long Sequences with Sparse Transformers

About

Transformers are powerful sequence models, but require time and memory that grow quadratically with the sequence length. In this paper we introduce sparse factorizations of the attention matrix which reduce this to $O(n \sqrt{n})$. We also introduce a) a variation on architecture and initialization to train deeper networks, b) the recomputation of attention matrices to save memory, and c) fast attention kernels for training. We call networks with these changes Sparse Transformers, and show they can model sequences tens of thousands of timesteps long using hundreds of layers. We use the same architecture to model images, audio, and text from raw bytes, setting a new state of the art for density modeling of Enwik8, CIFAR-10, and ImageNet-64. We generate unconditional samples that demonstrate global coherence and great diversity, and show it is possible in principle to use self-attention to model sequences of length one million or more.
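The $O(n \sqrt{n})$ cost comes from factorizing attention so that each position attends to roughly $\sqrt{n}$ others across two complementary heads. Below is a minimal NumPy sketch of the "strided" factorization the paper describes: one head attends to the previous `stride` positions, the other to every `stride`-th earlier position. The helper names (`strided_sparse_masks`, `masked_attention`) are hypothetical, and the dense masked softmax is a reference implementation for clarity only; the paper's fused GPU kernels compute just the unmasked entries and never materialize the full $n \times n$ matrix.

```python
import numpy as np

def strided_sparse_masks(n, stride):
    """Boolean masks for the two heads of the 'strided' factorization:
    one head covers the previous `stride` positions (local context),
    the other covers every `stride`-th earlier position (periodic summary)."""
    i = np.arange(n)[:, None]  # query positions
    j = np.arange(n)[None, :]  # key positions
    causal = j <= i
    local = causal & (i - j < stride)            # head A: recent context
    strided = causal & ((i - j) % stride == 0)   # head B: periodic summary
    return local, strided

def masked_attention(q, k, v, mask):
    """Dense reference attention under a boolean mask (illustrative only)."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    scores = np.where(mask, scores, -1e9)        # block disallowed positions
    weights = np.exp(scores - scores.max(-1, keepdims=True))
    weights /= weights.sum(-1, keepdims=True)
    return weights @ v

# Usage sketch: n = 64 timesteps, stride = 8 ~ sqrt(n)
n, d, stride = 64, 16, 8
rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((n, d)) for _ in range(3))
local, strided = strided_sparse_masks(n, stride)
out = masked_attention(q, k, v, local | strided)
print(out.shape)  # (64, 16)
```

With stride $\approx \sqrt{n}$, each query touches about $2\sqrt{n}$ keys across the two masks, so the total work is $O(n \sqrt{n})$ rather than the $O(n^2)$ of dense attention.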

Rewon Child, Scott Gray, Alec Radford, Ilya Sutskever • 2019

Related benchmarks

Task                               Dataset                          Result                     Rank
Image Classification               CIFAR-10                         --                         471
Image Generation                   CIFAR-10 (test)                  --                         471
Unconditional Image Generation     CIFAR-10 (test)                  --                         216
Character-level Language Modeling  enwik8 (test)                    BPC: 0.99                  195
Long-range sequence modeling       Long Range Arena (LRA)           Text Accuracy: 63.58       164
Long-range sequence modeling       Long Range Arena (LRA) (test)    Accuracy (Avg): 51.2       158
Language Modeling                  WikiText-103                     PPL: 20.5                  146
Density Estimation                 CIFAR-10 (test)                  Bits/dim: 2.8              134
Long sequence classification       LRA (Long Range Arena) (test)    Average Accuracy: 57.42    92
Efficiency Analysis                Long Range Arena (LRA)           Steps per second: 78.3     84

Showing 10 of 55 rows
