
Adaptive Attention Span in Transformers

About

We propose a novel self-attention mechanism that can learn its optimal attention span. This allows us to significantly extend the maximum context size used in Transformers, while maintaining control over their memory footprint and computational time. We show the effectiveness of our approach on the task of character-level language modeling, where we achieve state-of-the-art performance on text8 and enwik8 by using a maximum context of 8k characters.
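The core idea is a soft masking function m_z(x) = clamp((R + z - x) / R, 0, 1) applied to attention weights, where x is the distance between query and key positions, z is a learned per-head span, and R controls the softness of the ramp. A minimal NumPy sketch of this masking (the function follows the paper; the toy scores and parameter values are illustrative):

```python
import numpy as np

def soft_mask(distances, z, R=32):
    # Adaptive-span soft mask: m_z(x) = clamp((R + z - x) / R, 0, 1).
    # Positions within the learned span z get weight 1, positions
    # beyond z + R get weight 0, with a linear ramp in between.
    return np.clip((R + z - distances) / R, 0.0, 1.0)

# Toy example: unnormalized attention scores over the last 8 positions.
scores = np.random.randn(8)
distances = np.arange(8)           # distance of each key from the query
mask = soft_mask(distances, z=4.0, R=2)

# Masked softmax: attention weights decay to zero beyond the span,
# so keys past z + R contribute nothing and need not be stored.
masked = np.exp(scores) * mask
weights = masked / masked.sum()
```

Because the mask is differentiable in z, each head can learn its own span with gradient descent, and heads that only need short contexts stop attending (and allocating memory) far back in the sequence.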

Sainbayar Sukhbaatar, Edouard Grave, Piotr Bojanowski, Armand Joulin • 2019

Related benchmarks

Task                              | Dataset             | Result            | Rank
Language Modeling                 | WikiText-103 (test) | Perplexity 20.6   | 524
Character-level Language Modeling | enwik8 (test)       | BPC 0.9752        | 195
Character-level Language Modeling | text8 (test)        | BPC 1.07          | 128
Character-level Language Modeling | enwik8 (val)        | BPC 1.04          | 15
Character-level Language Modeling | text8 (dev)         | BPC 1.01          | 13
Character-level Language Modeling | enwik8 (dev)        | BPC 1             | 10
Object Collision                  | Object Collision (test) | Test Error 0.598 | 6

Other info

Code
