
cosFormer: Rethinking Softmax in Attention

About

The Transformer has shown great success in natural language processing, computer vision, and audio processing. As one of its core components, softmax attention helps capture long-range dependencies, yet it prohibits scaling up due to its quadratic space and time complexity in the sequence length. Kernel methods are often adopted to reduce this complexity by approximating the softmax operator. Nevertheless, due to approximation errors, their performance varies across tasks and corpora, and they suffer significant performance drops compared with vanilla softmax attention. In this paper, we propose a linear transformer called cosFormer that can achieve comparable or better accuracy than the vanilla transformer in both causal and cross attention. cosFormer is based on two key properties of softmax attention: (i) non-negativity of the attention matrix; (ii) a non-linear re-weighting scheme that concentrates the distribution of the attention matrix. As its linear substitute, cosFormer fulfills these properties with a linear operator and a cosine-based distance re-weighting mechanism. Extensive experiments on language modeling and text understanding tasks demonstrate the effectiveness of our method. We further examine our method on long sequences and achieve state-of-the-art performance on the Long-Range Arena benchmark. The source code is available at https://github.com/OpenNLPLab/cosFormer.
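The two properties above can be sketched in a few lines of NumPy. This is a minimal, non-causal illustration (not the authors' implementation; see the linked repository for that): ReLU feature maps keep the attention weights non-negative, and the cosine re-weighting cos(π/2 · (i−j)/M) is decomposed via the angle-difference identity into two linear-attention terms, so the whole computation stays linear in the sequence length.

```python
import numpy as np

def cosformer_attention(Q, K, V, eps=1e-6):
    """Sketch of cosine re-weighted linear attention (non-causal).

    Implicitly computes weights ReLU(q_i)·ReLU(k_j) * cos(pi/2 * (i-j)/M)
    without ever forming the n x n attention matrix, using
    cos(a - b) = cos(a)cos(b) + sin(a)sin(b).
    """
    n, _ = Q.shape
    M = n  # positions are scaled by the sequence length
    Qp, Kp = np.maximum(Q, 0), np.maximum(K, 0)  # ReLU: non-negative weights
    angles = (np.pi / 2) * np.arange(n) / M
    c, s = np.cos(angles)[:, None], np.sin(angles)[:, None]
    # Split the re-weighted product into two standard linear-attention terms,
    # each computed right-to-left as Q' (K'^T V) in O(n * d^2).
    Qc, Qs = Qp * c, Qp * s
    Kc, Ks = Kp * c, Kp * s
    num = Qc @ (Kc.T @ V) + Qs @ (Ks.T @ V)
    den = Qc @ Kc.sum(axis=0)[:, None] + Qs @ Ks.sum(axis=0)[:, None]
    return num / (den + eps)  # row-wise normalization replaces softmax
```

Because the cosine weight decays with the distance |i − j|, nearby tokens receive larger weights, which concentrates the attention distribution in the same spirit as softmax's non-linear re-weighting.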

Zhen Qin, Weixuan Sun, Hui Deng, Dongxu Li, Yunshen Wei, Baohong Lv, Junjie Yan, Lingpeng Kong, Yiran Zhong (2022)

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Language Modeling | WikiText-103 (test) | Perplexity | 23.1 | 579 |
| Natural Language Understanding | GLUE | SST-2 | 91.05 | 531 |
| Semantic Segmentation | ADE20K | mIoU | 27.17 | 366 |
| Image Classification | ImageNet-1k (val) | Top-1 Acc | 75.1 | 303 |
| Language Modeling | WikiText-103 (val) | PPL | 23.5 | 214 |
| Long-Range Sequence Modeling | Long Range Arena (LRA) | Text Accuracy | 67.7 | 177 |
| Offline Reinforcement Learning | D4RL halfcheetah-medium-expert | -- | -- | 155 |
| Offline Reinforcement Learning | D4RL hopper-medium-expert | -- | -- | 153 |
| Efficiency Analysis | Long Range Arena (LRA) | Steps per second | 96.46 | 84 |
| Semantic Segmentation | Cityscapes | mIoU | 40.56 | 82 |

Showing 10 of 42 rows.
