
Sparse Sinkhorn Attention

About

We propose Sparse Sinkhorn Attention, a new efficient and sparse method for learning to attend. Our method is based on differentiable sorting of internal representations. Concretely, we introduce a meta sorting network that learns to generate latent permutations over sequences. Given sorted sequences, we are then able to compute quasi-global attention with only local windows, improving the memory efficiency of the attention module. To this end, we propose new algorithmic innovations such as Causal Sinkhorn Balancing and SortCut, a dynamic sequence truncation method for tailoring Sinkhorn Attention for encoding and/or decoding purposes. Via extensive experiments on algorithmic seq2seq sorting, language modeling, pixel-wise image generation, document classification and natural language inference, we demonstrate that our memory efficient Sinkhorn Attention method is competitive with vanilla attention and consistently outperforms recently proposed efficient Transformer models such as Sparse Transformers.
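To make the core idea concrete, here is a minimal NumPy sketch of Sinkhorn balancing: alternating row and column normalization of a logit matrix converges toward a doubly stochastic "soft permutation", which can then re-order block representations of a sequence before local attention is applied. The toy setup (random logits standing in for the meta sorting network, 4 blocks of 2-d vectors) is illustrative only, not the paper's implementation.

```python
import numpy as np

def sinkhorn_normalize(logits, n_iters=20):
    """Sinkhorn balancing: alternately normalize rows and columns in
    log space so the matrix converges toward a doubly stochastic
    (soft permutation) matrix."""
    log_p = np.asarray(logits, dtype=float)
    for _ in range(n_iters):
        # Normalize rows, then columns, both in log space for stability.
        log_p = log_p - np.logaddexp.reduce(log_p, axis=1, keepdims=True)
        log_p = log_p - np.logaddexp.reduce(log_p, axis=0, keepdims=True)
    return np.exp(log_p)

# Toy example: a sequence split into 4 blocks, each summarized by a 2-d vector.
rng = np.random.default_rng(0)
blocks = rng.normal(size=(4, 2))

# In the paper, a meta sorting network produces block-to-block logits;
# here we use random logits as a stand-in.
logits = rng.normal(size=(4, 4))
perm = sinkhorn_normalize(logits)

# Applying the soft permutation re-orders the blocks: each output block is
# a convex combination of inputs, concentrating on a single block as the
# matrix approaches a hard permutation.
sorted_blocks = perm @ blocks
```

After this soft sort, attention restricted to local windows over `sorted_blocks` can still mix information from distant positions in the original order, which is what lets local attention act quasi-globally.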

Yi Tay, Dara Bahri, Liu Yang, Donald Metzler, Da-Cheng Juan • 2020

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Long-range sequence modeling | Long Range Arena (LRA) | Text Accuracy: 61.2 | 164 |
| Long-range sequence modeling | Long Range Arena (LRA) (test) | Accuracy (Avg): 46.8 | 158 |
| Efficiency Analysis | Long Range Arena (LRA) | Steps per second: 92.72 | 84 |
| Long-sequence modeling | Long Range Arena (LRA) v1 (test) | ListOps: 33.67 | 66 |
| Hierarchical Reasoning | ListOps Long Range Arena (test) | Accuracy: 33.67 | 26 |
| Sequence Modeling | Long Range Arena (val) | ListOps Accuracy: 33.67 | 26 |
| Hierarchical reasoning on symbolic sequences | Long ListOps (test) | Accuracy: 33.67 | 22 |
| Sequence Classification | charIMDB | Accuracy: 63.6 | 13 |
| Sequence Classification | ListOps | Accuracy (%): 17.1 | 13 |
