
Rethinking Attention with Performers

About

We introduce Performers, Transformer architectures which can estimate regular (softmax) full-rank-attention Transformers with provable accuracy, but using only linear (as opposed to quadratic) space and time complexity, without relying on any priors such as sparsity or low-rankness. To approximate softmax attention-kernels, Performers use a novel Fast Attention Via positive Orthogonal Random features approach (FAVOR+), which may be of independent interest for scalable kernel methods. FAVOR+ can be also used to efficiently model kernelizable attention mechanisms beyond softmax. This representational power is crucial to accurately compare softmax with other kernels for the first time on large-scale tasks, beyond the reach of regular Transformers, and investigate optimal attention-kernels. Performers are linear architectures fully compatible with regular Transformers and with strong theoretical guarantees: unbiased or nearly-unbiased estimation of the attention matrix, uniform convergence and low estimation variance. We tested Performers on a rich set of tasks stretching from pixel-prediction through text models to protein sequence modeling. We demonstrate competitive results with other examined efficient sparse and dense attention methods, showcasing effectiveness of the novel attention-learning paradigm leveraged by Performers.
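As a rough illustration of the mechanism the abstract describes, here is a minimal NumPy sketch of linear attention with positive random features in the spirit of FAVOR+. It uses plain i.i.d. Gaussian projections (the paper additionally orthogonalizes them to reduce variance), and all function names and parameters here are illustrative, not the authors' reference implementation.

```python
import numpy as np

def positive_random_features(x, w):
    # Positive random features for the softmax kernel:
    # phi(x) = exp(w @ x - ||x||^2 / 2) / sqrt(m), so that
    # E[phi(q) . phi(k)] = exp(q . k) (unbiased estimate).
    m = w.shape[0]
    proj = x @ w.T                                   # (L, m)
    sq_norm = np.sum(x ** 2, axis=-1, keepdims=True) / 2.0
    return np.exp(proj - sq_norm) / np.sqrt(m)

def performer_attention(q, k, v, n_features=256, seed=0):
    # Estimates softmax attention in O(L * m * d) time and space,
    # instead of the O(L^2 * d) of the exact attention matrix.
    d = q.shape[-1]
    rng = np.random.default_rng(seed)
    w = rng.standard_normal((n_features, d))
    # Fold the usual 1/sqrt(d) temperature into q and k.
    q, k = q / d ** 0.25, k / d ** 0.25
    qp = positive_random_features(q, w)              # (L, m)
    kp = positive_random_features(k, w)              # (L, m)
    # Associativity is the whole trick: qp @ (kp.T @ v)
    # never materializes the L x L attention matrix.
    num = qp @ (kp.T @ v)                            # (L, d)
    den = qp @ kp.sum(axis=0)                        # (L,)
    return num / den[:, None]
```

With enough random features the output approaches exact softmax attention; the key design point is that reordering the matrix products makes cost linear in sequence length.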

Krzysztof Choromanski, Valerii Likhosherstov, David Dohan, Xingyou Song, Andreea Gane, Tamas Sarlos, Peter Hawkins, Jared Davis, Afroz Mohiuddin, Lukasz Kaiser, David Belanger, Lucy Colwell, Adrian Weller • 2020

Related benchmarks

Task                           | Dataset             | Metric          | Result | Rank
Image Classification           | CIFAR-100 (test)    | Accuracy        | 73.11  | 3518
Image Classification           | CIFAR-10 (test)     | Accuracy        | 91.58  | 3381
Language Modeling              | PTB                 | Perplexity      | 49.1   | 1034
Image Classification           | ImageNet-1k (val)   | Top-1 Accuracy  | 79.5   | 844
Language Modeling              | WikiText-103 (test) | Perplexity      | 26.8   | 579
Natural Language Understanding | GLUE                | SST-2           | 83.8   | 531
Natural Language Understanding | GLUE (dev)          | SST-2 (Acc)     | 91.97  | 518
Semantic Segmentation          | ADE20K              | mIoU            | 41.16  | 366
Image Classification           | ImageNet-1k (val)   | Top-1 Accuracy  | 75.2   | 303
Image Generation               | ImageNet (val)      | Inception Score | 38.07  | 247
Showing 10 of 112 rows
