
Synthesizer: Rethinking Self-Attention in Transformer Models

About

The dot product self-attention is known to be central and indispensable to state-of-the-art Transformer models. But is it really required? This paper investigates the true importance and contribution of the dot product-based self-attention mechanism on the performance of Transformer models. Via extensive experiments, we find that (1) random alignment matrices surprisingly perform quite competitively and (2) learning attention weights from token-token (query-key) interactions is useful but not that important after all. To this end, we propose Synthesizer, a model that learns synthetic attention weights without token-token interactions. In our experiments, we first show that simple Synthesizers achieve highly competitive performance when compared against vanilla Transformer models across a range of tasks, including machine translation, language modeling, text generation and GLUE/SuperGLUE benchmarks. When composed with dot product attention, we find that Synthesizers consistently outperform Transformers. Moreover, we conduct additional comparisons of Synthesizers against Dynamic Convolutions, showing that the simple Random Synthesizer is not only 60% faster but also improves perplexity by a relative 3.5%. Finally, we show that simple factorized Synthesizers can outperform Linformers on encoding-only tasks.
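To make the core idea concrete, here is a minimal numpy sketch contrasting standard dot-product attention with the Random Synthesizer variant described in the abstract, where the L x L alignment matrix is a directly learned parameter rather than being computed from query-key interactions. All function and variable names are illustrative, not taken from the paper's code.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def dot_product_attention(X, Wq, Wk, Wv):
    # Standard Transformer attention: alignment weights come from
    # token-token (query-key) dot products.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    return softmax(scores) @ V

def random_synthesizer_attention(X, B, Wv):
    # Random Synthesizer: the L x L alignment matrix B is itself a
    # trainable parameter, independent of the input tokens, so no
    # query-key interaction is computed at all.
    V = X @ Wv
    return softmax(B) @ V

rng = np.random.default_rng(0)
L, d = 4, 8                      # sequence length, model dimension
X = rng.normal(size=(L, d))      # token representations
B = rng.normal(size=(L, L))      # learned during training; random init here
Wv = rng.normal(size=(d, d))
out = random_synthesizer_attention(X, B, Wv)
print(out.shape)  # (4, 8)
```

Because B does not depend on the input, the Random Synthesizer skips the Q/K projections and the L x L dot-product computation entirely, which is the source of the speedup reported against Dynamic Convolutions.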

Yi Tay, Dara Bahri, Donald Metzler, Da-Cheng Juan, Zhe Zhao, Che Zheng • 2020

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Language Modeling | WikiText-103 (test) | Perplexity 32.43 | 524 |
| Machine Translation | WMT En-De 2014 (test) | BLEU 28.47 | 379 |
| Character-level Language Modeling | enwik8 (test) | BPC 1.298 | 195 |
| Language Modeling | WikiText-103 (val) | PPL 31.31 | 180 |
| Long-range sequence modeling | Long Range Arena (LRA) | Text Accuracy 61.68 | 164 |
| Long-range sequence modeling | Long Range Arena (LRA) (test) | Accuracy (Avg) 51.1 | 158 |
| Text Classification | AGNews | Accuracy 89.1 | 119 |
| Text Classification | IMDB | Accuracy 84.6 | 107 |
| Long sequence classification | LRA (Long Range Arena) (test) | Average Accuracy 52.88 | 92 |
| Efficiency Analysis | Long Range Arena (LRA) | Steps per second 65.44 | 84 |

Showing 10 of 20 rows.
