
Scatterbrain: Unifying Sparse and Low-rank Attention Approximation

About

Recent advances in efficient Transformers have exploited either the sparsity or low-rank properties of attention matrices to reduce the computational and memory bottlenecks of modeling long sequences. However, it is still challenging to balance the trade-off between model quality and efficiency to perform a one-size-fits-all approximation for different tasks. To better understand this trade-off, we observe that sparse and low-rank approximations excel in different regimes, determined by the softmax temperature in attention, and sparse + low-rank can outperform each individually. Inspired by the classical robust-PCA algorithm for sparse and low-rank decomposition, we propose Scatterbrain, a novel way to unify sparse (via locality sensitive hashing) and low-rank (via kernel feature map) attention for accurate and efficient approximation. The estimation is unbiased with provably low error. We empirically show that Scatterbrain can achieve 2.1x lower error than baselines when serving as a drop-in replacement in BigGAN image generation and pre-trained T2T-ViT. On a pre-trained T2T Vision transformer, even without fine-tuning, Scatterbrain can reduce 98% of attention memory at the cost of only 1% drop in accuracy. We demonstrate Scatterbrain for end-to-end training with up to 4 points better perplexity and 5 points better average accuracy than sparse or low-rank efficient transformers on language modeling and long-range-arena tasks.
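To make the idea concrete, below is a minimal, hypothetical sketch of a sparse + low-rank attention approximation in the spirit described above: a Performer-style positive random feature map supplies an unbiased low-rank estimate of exp(QKᵀ), and a sparse term corrects the residual on a small set of large entries (here chosen by per-row top-k as a stand-in for LSH bucketing; the function names, sizes, and selection rule are illustrative assumptions, not the paper's implementation).

```python
import numpy as np

def softmax_attention(Q, K, V):
    # Exact softmax attention for reference: softmax(Q K^T) V.
    scores = Q @ K.T
    A = np.exp(scores - scores.max(axis=-1, keepdims=True))
    A /= A.sum(axis=-1, keepdims=True)
    return A @ V

def scatterbrain_sketch(Q, K, V, n_features=64, top_k=8, seed=0):
    """Illustrative sparse + low-rank attention approximation.

    Low-rank part: positive random features phi(.) with
    E[phi(q) . phi(k)] = exp(q . k) (Performer-style estimator).
    Sparse part: on the top_k entries per row (a stand-in for
    LSH-selected pairs), replace the low-rank estimate with the
    exact value, i.e. add the residual exp(q.k) - phi(q).phi(k).
    """
    rng = np.random.default_rng(seed)
    d = Q.shape[1]
    W = rng.standard_normal((n_features, d))

    def phi(X):
        # Positive random features: exp(w.x - |x|^2 / 2) / sqrt(m).
        proj = X @ W.T
        return np.exp(proj - (X ** 2).sum(-1, keepdims=True) / 2) / np.sqrt(n_features)

    Qf, Kf = phi(Q), phi(K)
    A_lr = Qf @ Kf.T                       # low-rank estimate of exp(Q K^T)

    # In a real implementation only the sparse support is evaluated exactly;
    # here we form the full matrix for clarity on a toy-sized problem.
    A_exact = np.exp(Q @ K.T)
    idx = np.argsort(-A_exact, axis=1)[:, :top_k]   # stand-in for LSH buckets
    rows = np.arange(Q.shape[0])[:, None]
    S = np.zeros_like(A_exact)
    S[rows, idx] = A_exact[rows, idx] - A_lr[rows, idx]  # sparse correction

    A = A_lr + S                           # unnormalized sparse + low-rank estimate
    return (A @ V) / A.sum(axis=1, keepdims=True)
```

On the sparse support the combined estimate matches exp(QKᵀ) exactly, so the sparse term removes the largest residuals of the low-rank approximation, which is where kernel feature maps tend to be least accurate.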

Beidi Chen, Tri Dao, Eric Winsor, Zhao Song, Atri Rudra, Christopher Ré • 2021

Related benchmarks

Task | Dataset | Metric | Result | Rank
Language Modeling | WikiText-103 (test) | Perplexity | 26.72 | 524
Natural Language Understanding | GLUE | SST-2 | 92.7 | 452
Long-range sequence modeling | Long Range Arena (LRA) | Text Accuracy | 64.55 | 164
Language Modeling | WikiText-103 | PPL | 26.72 | 146
Image Classification | ImageNet (val) | Top-1 Accuracy | 80.7 | 5
Language Modeling | Copy | PPL | 2.58 | 4
Language Modeling | Copy (test) | Perplexity (PPL) | 2.58 | 4

Other info

Code
