
Efficient Long Sequence Modeling via State Space Augmented Transformer

About

Transformer models have achieved superior performance on a variety of natural language processing tasks. However, the quadratic computational cost of the attention mechanism limits their practicality for long sequences. Existing attention variants improve computational efficiency, but they have limited ability to effectively compute global information. In parallel to Transformers, state space models (SSMs) are tailored for long sequences, but they are not flexible enough to capture complicated local information. We propose SPADE, short for $\underline{\textbf{S}}$tate s$\underline{\textbf{P}}$ace $\underline{\textbf{A}}$ugmente$\underline{\textbf{D}}$ Transform$\underline{\textbf{E}}$r. Specifically, we augment the bottom layer of SPADE with an SSM and employ efficient local attention methods in the other layers. The SSM supplies global information, complementing the lack of long-range dependencies in local attention methods. Experimental results on the Long Range Arena benchmark and on language modeling tasks demonstrate the effectiveness of the proposed method. To further demonstrate the scalability of SPADE, we pre-train large encoder-decoder models and present fine-tuning results on natural language understanding and natural language generation tasks.
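To make the bottom-layer design concrete — an SSM branch providing global context combined with a windowed local-attention branch — here is a minimal PyTorch sketch. This is an illustration only, not the paper's implementation: the class names (`SimpleSSM`, `LocalAttention`, `SpadeBottomLayer`), the diagonal SSM recurrence, the window size, and the concatenation-based fusion are all assumptions made for clarity.

```python
import torch
import torch.nn as nn

class SimpleSSM(nn.Module):
    """Minimal diagonal state space layer (hypothetical stand-in for S4/DSS)."""
    def __init__(self, d_model, d_state=16):
        super().__init__()
        self.A = nn.Parameter(-torch.rand(d_model, d_state))       # negative log-decay rates
        self.B = nn.Parameter(torch.randn(d_model, d_state) * 0.1)  # input projection
        self.C = nn.Parameter(torch.randn(d_model, d_state) * 0.1)  # output projection

    def forward(self, x):  # x: (batch, seq_len, d_model)
        b, seq_len, d = x.shape
        h = torch.zeros(b, d, self.A.shape[1], device=x.device)
        decay = torch.exp(self.A)  # elementwise decay in (0, 1)
        ys = []
        for t in range(seq_len):  # linear-time recurrence over the sequence
            h = decay * h + self.B * x[:, t].unsqueeze(-1)
            ys.append((h * self.C).sum(-1))
        return torch.stack(ys, dim=1)  # (batch, seq_len, d_model)

class LocalAttention(nn.Module):
    """Windowed self-attention: each token attends only within a fixed window."""
    def __init__(self, d_model, n_heads=4, window=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.window = window

    def forward(self, x):
        seq_len = x.size(1)
        idx = torch.arange(seq_len, device=x.device)
        # True entries are masked out: positions farther apart than the window.
        mask = (idx[None, :] - idx[:, None]).abs() > self.window
        out, _ = self.attn(x, x, x, attn_mask=mask)
        return out

class SpadeBottomLayer(nn.Module):
    """Bottom layer: fuse the global SSM branch with the local attention branch."""
    def __init__(self, d_model):
        super().__init__()
        self.ssm = SimpleSSM(d_model)
        self.local = LocalAttention(d_model)
        self.proj = nn.Linear(2 * d_model, d_model)  # fusion by concat + projection (assumed)

    def forward(self, x):
        return self.proj(torch.cat([self.ssm(x), self.local(x)], dim=-1))

x = torch.randn(2, 32, 64)
layer = SpadeBottomLayer(64)
out = layer(x)
print(out.shape)
```

The key property the sketch preserves is the asymmetric division of labor: the SSM recurrence is linear in sequence length and carries long-range information, while the attention branch is restricted to a local window, so neither branch incurs the full quadratic cost.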

Simiao Zuo, Xiaodong Liu, Jian Jiao, Denis Charles, Eren Manavoglu, Tuo Zhao, Jianfeng Gao · 2022

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Language Modeling | WikiText-103 (test) | Perplexity: 24.55 | 524 |
| Language Modeling | WikiText-103 (val) | Perplexity: 24.18 | 180 |
| Long-range sequence modeling | Long Range Arena (LRA) | Text Accuracy: 86.29 | 164 |
| Long-range sequence modeling | Long Range Arena (LRA) (test) | Accuracy (Avg): 87.4 | 158 |
