PermuteFormer: Efficient Relative Position Encoding for Long Sequences

About

A recent variant of the Transformer, Performer, scales the Transformer to longer sequences with a linear attention mechanism. However, it is not compatible with relative position encoding, which has advantages over absolute position encoding. In this paper, we discuss possible ways to add relative position encoding to Performer. Based on this analysis, we propose PermuteFormer, a Performer-based model with relative position encoding that scales linearly on long sequences. PermuteFormer applies a position-dependent transformation to queries and keys to encode positional information into the attention module. This transformation is carefully crafted so that the final output of self-attention is not affected by the absolute positions of tokens. By design, PermuteFormer introduces negligible computational overhead, so it runs as fast as Performer. We evaluate PermuteFormer on Long-Range Arena, a benchmark for long sequences, as well as on WikiText-103, a language modeling dataset. The experiments show that PermuteFormer uniformly improves the performance of Performer with almost no computational overhead and outperforms the vanilla Transformer on most tasks.
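The core mechanism can be sketched in a few lines. The NumPy snippet below is a minimal illustration of the idea as described in the abstract, not the paper's implementation; `apply_power`, `linear_attention`, and all dimensions are hypothetical names chosen for this example. It first checks that permuting the feature axis of queries and keys by a position-indexed power of one fixed permutation makes the attention score a function of relative position only, then plugs the transformed features into a Performer-style linear attention.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8                      # per-head feature dimension (illustrative)
perm = rng.permutation(d)  # one fixed permutation of the feature axis

def apply_power(x, perm, power):
    """Permute the feature axis of x by `perm`, composed `power` times.

    This stands in for the position-dependent transformation: the token
    at position i gets the permutation applied i times.
    """
    idx = np.arange(len(perm))
    for _ in range(power):
        idx = idx[perm]
    return x[..., idx]

# A permutation matrix P is orthogonal, so
#   (P^i q) . (P^j k) = q . (P^(j-i) k):
# the attention score depends only on the relative offset j - i,
# never on the absolute positions i and j.
q, k = rng.random(d), rng.random(d)
s1 = apply_power(q, perm, 5) @ apply_power(k, perm, 2)  # positions (5, 2)
s2 = apply_power(q, perm, 9) @ apply_power(k, perm, 6)  # positions (9, 6)
assert np.isclose(s1, s2)  # same relative offset => same score

def linear_attention(Q, K, V):
    """Performer-style (non-causal) linear attention in O(n * d * d_v).

    Q, K: (n, d) non-negative feature maps of queries and keys.
    V:    (n, d_v) values.
    """
    kv = K.T @ V           # (d, d_v) global summary, computed once
    z = Q @ K.sum(axis=0)  # (n,) normalizers
    return (Q @ kv) / z[:, None]

# Encode positions by permuting each token's features i times, then run
# the usual linear attention on the transformed queries and keys.
n = 6
Q, K, V = rng.random((n, d)), rng.random((n, d)), rng.random((n, 4))
Qp = np.stack([apply_power(Q[i], perm, i) for i in range(n)])
Kp = np.stack([apply_power(K[i], perm, i) for i in range(n)])
out = linear_attention(Qp, Kp, V)  # (n, 4), position-aware output
```

Because the transformation is just an index shuffle of existing features, it adds essentially no floating-point work on top of Performer's linear attention, which matches the abstract's claim of negligible overhead.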

Peng Chen • 2021

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Language Modeling | WikiText-103 (test) | Perplexity | 24.13 | 524 |
| Language Modeling | WikiText-103 (val) | Perplexity | 23.71 | 180 |
| Long-range sequence modeling | Long Range Arena (LRA) | Text Accuracy | 66.62 | 164 |
