
Mega: Moving Average Equipped Gated Attention

About

The design choices in the Transformer attention mechanism, including weak inductive bias and quadratic computational complexity, have limited its application for modeling long sequences. In this paper, we introduce Mega, a simple, theoretically grounded, single-head gated attention mechanism equipped with (exponential) moving average to incorporate inductive bias of position-aware local dependencies into the position-agnostic attention mechanism. We further propose a variant of Mega that offers linear time and space complexity yet yields only minimal quality loss, by efficiently splitting the whole sequence into multiple chunks with fixed length. Extensive experiments on a wide range of sequence modeling benchmarks, including the Long Range Arena, neural machine translation, auto-regressive language modeling, and image and speech classification, show that Mega achieves significant improvements over other sequence models, including variants of Transformers and recent state space models.
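To make the two key ideas concrete — a damped exponential moving average injecting position-aware local bias, and fixed-length chunking for linear complexity — here is a minimal NumPy sketch. The function names, the scalar damped-EMA recurrence h_t = α·x_t + (1 − α·δ)·h_{t−1}, and the plain softmax attention inside each chunk are illustrative assumptions, not the paper's exact (multi-dimensional, gated) formulation.

```python
import numpy as np

def damped_ema(x, alpha, delta):
    """Damped EMA over a sequence (illustrative form of Mega's EMA layer).

    x: (seq_len, dim); alpha, delta: (dim,) with entries in (0, 1].
    Recurrence: h_t = alpha * x_t + (1 - alpha * delta) * h_{t-1},
    so each output mixes the current input with a decaying local history.
    """
    h = np.zeros(x.shape[1])
    out = np.empty_like(x)
    for t in range(x.shape[0]):
        h = alpha * x[t] + (1.0 - alpha * delta) * h
        out[t] = h
    return out

def chunked_attention(q, k, v, chunk):
    """Softmax attention restricted to fixed-length chunks.

    Each position attends only within its own chunk, so time and memory
    scale linearly in sequence length instead of quadratically.
    """
    seq_len, dim = q.shape
    out = np.empty_like(v)
    for s in range(0, seq_len, chunk):
        e = min(s + chunk, seq_len)
        scores = q[s:e] @ k[s:e].T / np.sqrt(dim)
        scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
        w = np.exp(scores)
        w /= w.sum(axis=-1, keepdims=True)
        out[s:e] = w @ v[s:e]
    return out
```

The EMA smooths the input before attention is applied, which is how local positional structure reaches the otherwise position-agnostic attention; the chunked variant trades a small amount of cross-chunk context for linear cost.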

Xuezhe Ma, Chunting Zhou, Xiang Kong, Junxian He, Liangke Gui, Graham Neubig, Jonathan May, Luke Zettlemoyer • 2022

Related benchmarks

Task | Dataset | Metric | Result | Rank
Image Classification | ImageNet 1k (test) | Top-1 Accuracy | 82.31 | 798
Language Modeling | WikiText-103 (test) | Perplexity | 18.07 | 524
Character-level Language Modeling | enwik8 (test) | BPC | 1.02 | 195
Image Classification | ImageNet (val) | Top-1 Accuracy | 82.3 | 188
Language Modeling | WikiText-103 (val) | PPL | 17.17 | 180
Long-range sequence modeling | Long Range Arena (LRA) | Text Accuracy | 90.43 | 164
Long-range sequence modeling | Long Range Arena (LRA) (test) | Accuracy (Avg) | 88.21 | 158
Long-sequence modeling | Long Range Arena (LRA) v1 (test) | ListOps | 63.14 | 66
Audio Classification | Speech Commands (test) | Accuracy | 96.92 | 43
Long-range sequence modeling | LRA 92 (test) | ListOps Accuracy | 37.11 | 26

Showing 10 of 17 rows

Other info

Code
