
Combiner: Full Attention Transformer with Sparse Computation Cost

About

Transformers provide a class of expressive architectures that are extremely effective for sequence modeling. However, the key limitation of transformers is their quadratic memory and time complexity $\mathcal{O}(L^2)$ with respect to the sequence length in attention layers, which restricts their application to extremely long sequences. Most existing approaches leverage sparsity or low-rank assumptions in the attention matrix to reduce cost, but sacrifice expressiveness. Instead, we propose Combiner, which provides full attention capability in each attention head while maintaining low computation and memory complexity. The key idea is to treat the self-attention mechanism as a conditional expectation over embeddings at each location, and approximate the conditional distribution with a structured factorization. Each location can attend to all other locations, either via direct attention, or through indirect attention to abstractions, which are again conditional expectations of embeddings from corresponding local regions. We show that most sparse attention patterns used in existing sparse transformers are able to inspire the design of such factorization for full attention, resulting in the same sub-quadratic cost ($\mathcal{O}(L\log(L))$ or $\mathcal{O}(L\sqrt{L})$). Combiner is a drop-in replacement for attention layers in existing transformers and can be easily implemented in common frameworks. An experimental evaluation on both autoregressive and bidirectional sequence tasks demonstrates the effectiveness of this approach, yielding state-of-the-art results on several image and text modeling tasks.
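The factorization described above can be sketched in NumPy. The sketch below is a simplified, hedged reading of the idea, not the paper's exact parameterization: each query attends directly to its own local block, and indirectly to every other block through a per-block abstraction. The abstraction key (mean of the block's keys) and the query-independent within-block weighting are illustrative assumptions; the paper derives specific factorizations from existing sparse patterns.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def full_attention(q, k, v):
    # O(L^2) baseline: output_i = E_{j ~ p(j|i)}[v_j], p(j|i) = softmax_j(q_i . k_j)
    return softmax(q @ k.T) @ v

def combiner_style_attention(q, k, v, block):
    # Sketch of the structured factorization: p(j|i) is exact inside i's own
    # block ("direct attention") and factorized as p(block|i) * p(j|block)
    # elsewhere ("indirect attention"). Here p(j|block) is a query-independent
    # softmax of the block's keys against the block's mean key -- an assumption
    # for illustration only. Cost per query: block + L/block terms, so total
    # O(L * sqrt(L)) when block = sqrt(L).
    L, d = q.shape
    nb = L // block
    out = np.zeros_like(v)
    # Per-block abstractions: summary key and conditional-expectation value.
    k_blocks = k.reshape(nb, block, d)
    v_blocks = v.reshape(nb, block, d)
    k_abs = k_blocks.mean(axis=1)                         # (nb, d) abstraction keys
    w_in = softmax(k_blocks @ k_abs[:, :, None], axis=1)  # p(j|block), (nb, block, 1)
    v_abs = (w_in * v_blocks).sum(axis=1)                 # (nb, d) E[v | block]
    for i in range(L):
        b = i // block
        lo, hi = b * block, (b + 1) * block
        s_direct = q[i] @ k[lo:hi].T    # scores over i's own block
        s_indirect = q[i] @ k_abs.T     # one score per other block's abstraction
        s_indirect[b] = -np.inf         # own block is handled directly
        p = softmax(np.concatenate([s_direct, s_indirect]))
        out[i] = p[:block] @ v[lo:hi] + p[block:] @ v_abs
    return out
```

Note that with `block == L` there is a single block and no abstractions, so the sketch reduces exactly to full attention; with `block = sqrt(L)` each query touches only O(sqrt(L)) terms while still placing probability mass on every location.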

Hongyu Ren, Hanjun Dai, Zihang Dai, Mengjiao Yang, Jure Leskovec, Dale Schuurmans, Bo Dai • 2021

Related benchmarks

Task | Dataset | Metric | Result | Rank
Long-range sequence modeling | Long Range Arena (LRA) (test) | Accuracy (Avg) | 60.23 | 158
Long sequence classification | LRA (Long Range Arena) (test) | Average Accuracy | 58.34 | 92
Density Estimation | ImageNet 64x64 (test) | Bits Per Sub-Pixel | 3.42 | 62
Density Estimation | CIFAR-10 | bpd | 2.77 | 40
Language Modeling | Wiki-40B (val) | Perplexity | 16.119 | 35
Masked Language Modeling | C4 (val) | PPLX | 4.195 | 35
Language Modeling | PG-19, 500M parameter scale (test) | PPLX | 43.99 | 20
Density Estimation | ImageNet 64 | Bits per dimension | 3.42 | 16
Density Estimation | ImageNet 64x64 (val) | Bits/dim | 3.42 | 13
Image Generation | ImageNet64 downsampled (test) | BPD | 3.42 | 8

(Showing 10 of 12 rows)

Other info

Code
