
You Only Sample (Almost) Once: Linear Cost Self-Attention Via Bernoulli Sampling

About

Transformer-based models are widely used in natural language processing (NLP). Central to the transformer model is the self-attention mechanism, which captures the interactions of token pairs in the input sequences and depends quadratically on the sequence length. Training such models on longer sequences is expensive. In this paper, we show that a Bernoulli sampling attention mechanism based on Locality Sensitive Hashing (LSH) decreases the quadratic complexity of such models to linear. We bypass the quadratic cost by considering self-attention as a sum of individual tokens associated with Bernoulli random variables that can, in principle, be sampled at once by a single hash (although in practice, this number may be a small constant). This leads to an efficient sampling scheme for estimating self-attention that relies on specific modifications of LSH (to enable deployment on GPU architectures). We evaluate our algorithm on the GLUE benchmark at the standard sequence length of 512, where we see favorable performance relative to a standard pretrained Transformer. On the Long Range Arena (LRA) benchmark, for evaluating performance on long sequences, our method achieves results consistent with softmax self-attention but with sizable speed-ups and memory savings, and it often outperforms other efficient self-attention methods. Our code is available at https://github.com/mlpen/YOSO.
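The Bernoulli sampling idea rests on a standard LSH fact: for sign-of-random-hyperplane (SimHash) hashing, two vectors collide with probability 1 - θ/π, where θ is the angle between them, so hash collisions act as Bernoulli random variables whose mean tracks query–key similarity. The minimal NumPy sketch below illustrates only this building block and is not the authors' implementation; the dimension `d` and hyperplane count `m` are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # illustrative embedding dimension
q = rng.normal(size=d)  # a query vector
k = rng.normal(size=d)  # a key vector

# True angle between q and k.
cos = q @ k / (np.linalg.norm(q) * np.linalg.norm(k))
theta = np.arccos(np.clip(cos, -1.0, 1.0))

# SimHash: one bit per random hyperplane; q and k "collide" on a
# hyperplane when they fall on the same side of it.
m = 20000  # illustrative number of independent hyperplanes
H = rng.normal(size=(m, d))
collision_rate = (np.sign(H @ q) == np.sign(H @ k)).mean()

# LSH theory: P(collision) = 1 - theta / pi, so the empirical rate
# is a Monte Carlo (Bernoulli) estimate of angular similarity.
predicted = 1.0 - theta / np.pi
```

Concatenating τ such bits gives collision probability (1 - θ/π)^τ, which the paper uses as the sampled surrogate for the softmax attention weight of each query–key pair.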

Zhanpeng Zeng, Yunyang Xiong, Sathya N. Ravi, Shailesh Acharya, Glenn Fung, Vikas Singh • 2021

Related benchmarks

| Task                         | Dataset                    | Result              | Rank |
|------------------------------|----------------------------|---------------------|------|
| Long-range sequence modeling | Long Range Arena (LRA) (test) | Accuracy (Avg) 59.2 | 158  |
| Time-series classification   | SelfRegulationSCP2         | Accuracy 53.9       | 55   |
| Time-series classification   | Heartbeat                  | Accuracy 76.5       | 51   |
| Time-series classification   | UWaveGestureLibrary        | Accuracy 88.4       | 47   |
| Time-series classification   | SelfRegulationSCP1         | Accuracy 91.1       | 45   |
| Time-series classification   | PEMS-SF                    | Accuracy 85.2       | 45   |
| Classification               | LRA ListOps N=2000 (test)  | Accuracy 41.0       | 39   |
| Time-series classification   | FaceDetection              | Accuracy 67.3       | 34   |
| Time-series classification   | SpokenArabicDigits         | Accuracy 98.9       | 28   |
| Time-series classification   | JapaneseVowels             | Accuracy 98.6       | 28   |
Showing 10 of 17 rows.
