Attention is Not All You Need: Pure Attention Loses Rank Doubly Exponentially with Depth
About
Attention-based architectures have become ubiquitous in machine learning, yet our understanding of the reasons for their effectiveness remains limited. This work proposes a new way to understand self-attention networks: we show that their output can be decomposed into a sum of smaller terms, each involving the operation of a sequence of attention heads across layers. Using this decomposition, we prove that self-attention possesses a strong inductive bias towards "token uniformity". Specifically, without skip connections or multi-layer perceptrons (MLPs), the output converges doubly exponentially to a rank-1 matrix. On the other hand, skip connections and MLPs stop the output from degenerating. Our experiments verify the identified convergence phenomena on different variants of standard transformer architectures.
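As a rough sanity check of the rank-collapse claim, the sketch below stacks randomly initialized single-head self-attention layers with no skip connections and no MLPs, and tracks how far the output is from a token-uniform (rank-1) matrix. This is not the authors' code: the layer sizes, the Gaussian initialization, and the Frobenius-norm residual are illustrative assumptions (the paper analyzes the residual under a composite ℓ1/ℓ∞ norm).

```python
import numpy as np

def softmax(z):
    # Numerically stable row-wise softmax.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def attention_layer(X, rng):
    """One self-attention layer with random projections: no skip connection, no MLP."""
    n, d = X.shape
    Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
    scores = (X @ Wq) @ (X @ Wk).T / np.sqrt(d)
    A = softmax(scores)            # row-stochastic attention matrix
    return A @ (X @ Wv)

def uniformity_residual(X):
    """Relative Frobenius norm of X minus its closest 'all rows identical' matrix,
    used here as a simple proxy for the distance to a rank-1 output."""
    R = X - X.mean(axis=0, keepdims=True)
    return np.linalg.norm(R) / np.linalg.norm(X)

rng = np.random.default_rng(0)
n_tokens, d_model = 32, 64
X = rng.standard_normal((n_tokens, d_model))
print(f"depth  0: residual = {uniformity_residual(X):.3e}")
for depth in range(1, 11):
    X = attention_layer(X, rng)
    print(f"depth {depth:2d}: residual = {uniformity_residual(X):.3e}")
```

Swapping the return line for `X + A @ (X @ Wv)` adds a skip connection, which, per the paper's analysis, should stop the residual from collapsing toward zero.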
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Commonsense Reasoning | HellaSwag | Accuracy | 33.1 | 1891 |
| Commonsense Reasoning | WinoGrande | Accuracy | 49.5 | 1085 |
| Commonsense Reasoning | PIQA | Accuracy | 65.1 | 751 |
| Language Modeling | LAMBADA | Accuracy | 30.9 | 268 |
| Language Modeling | Arxiv Proof-pile | Perplexity | 17.1 | 40 |
| Language Modeling | Pubmed | Perplexity | 18.05 | 38 |
| Copy | Copy (OOD lengths: 2x, 4x, 8x, 16x, 32x, 64x) | Exact Match Accuracy | 100 | 30 |
| MQMTAR | MQMTAR (OOD lengths: 2x, 4x, 16x, 64x, 256x, 1024x) | Exact Match Accuracy | 100 | 30 |
| Reverse | Reverse (OOD lengths: 1.5x, 2x, 4x, 8x) | Exact Match Accuracy | 36 | 20 |
| Sort | Sort (OOD lengths: 2x, 4x, 8x) | Exact Match Accuracy | 0.00 | 15 |