
Rethinking Attention: Polynomial Alternatives to Softmax in Transformers

About

This paper questions whether the strong performance of softmax attention in transformers stems from producing a probability distribution over inputs. Instead, we argue that softmax's effectiveness lies in its implicit regularization of the Frobenius norm of the attention matrix, which stabilizes training. Motivated by this, we explore alternative activations, specifically polynomials, that achieve a similar regularization effect. Our theoretical analysis shows that certain polynomials can serve as effective substitutes for softmax, achieving strong performance across transformer applications despite violating softmax's typical properties of positivity, normalization, and sparsity. Extensive experiments support these findings, offering a new perspective on attention mechanisms.

Hemanth Saratchandran, Jianqiao Zheng, Yiping Ji, Wenbo Zhang, Simon Lucey • 2024
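To make the idea concrete, below is a minimal sketch of how a polynomial activation can stand in for softmax in scaled dot-product attention. The cubic power and the Frobenius-norm rescaling are illustrative assumptions standing in for the paper's analysis, not its exact formulation; function names are hypothetical.

```python
import torch
import torch.nn.functional as F

def softmax_attention(q, k, v):
    # Standard scaled dot-product attention: softmax over the key dimension.
    d = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d ** 0.5
    return F.softmax(scores, dim=-1) @ v

def polynomial_attention(q, k, v, power=3):
    # Illustrative polynomial substitute (assumed form, not the paper's exact one):
    # apply an element-wise power to the scores, then rescale the attention
    # matrix to a fixed Frobenius norm, mimicking the implicit regularization
    # the paper attributes to softmax. The result is neither positive nor
    # row-normalized, yet it still mixes the value vectors.
    d = q.size(-1)
    scores = (q @ k.transpose(-2, -1) / d ** 0.5) ** power
    n = scores.size(-1)
    fro = torch.linalg.matrix_norm(scores, keepdim=True) + 1e-6
    # Target norm sqrt(n): the upper bound on the Frobenius norm of an
    # n x n row-stochastic (softmax) attention matrix.
    attn = scores * (n ** 0.5 / fro)
    return attn @ v

# Toy usage: batch of 2 sequences, length 16, head dimension 32.
q, k, v = (torch.randn(2, 16, 32) for _ in range(3))
print(polynomial_attention(q, k, v).shape)  # torch.Size([2, 16, 32])
```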

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Image Classification | ImageNet-1k (val) | Top-1 Accuracy | 83.6 | 1469 |
| Long-range sequence modeling | Long Range Arena (LRA) (test) | -- | -- | 158 |
| Object Detection | COCO mini (val) | AP | 45.1 | 132 |
| Instance Segmentation | COCO mini (val) | AP^m | 40.4 | 72 |
| Modeling Partial Differential Equations | Convection PDE, 1-dimensional (test) | Loss | 4.10e-5 | 4 |
| Modeling Partial Differential Equations | 1D-Reaction PDE (test) | Loss | 2.80e-6 | 4 |
