
Pay Less Attention with Lightweight and Dynamic Convolutions

About

Self-attention is a useful mechanism to build generative models for language and images. It determines the importance of context elements by comparing each element to the current time step. In this paper, we show that a very lightweight convolution can perform competitively to the best reported self-attention results. Next, we introduce dynamic convolutions which are simpler and more efficient than self-attention. We predict separate convolution kernels based solely on the current time-step in order to determine the importance of context elements. The number of operations required by this approach scales linearly in the input length, whereas self-attention is quadratic. Experiments on large-scale machine translation, language modeling and abstractive summarization show that dynamic convolutions improve over strong self-attention models. On the WMT'14 English-German test set dynamic convolutions achieve a new state of the art of 29.7 BLEU.
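The core idea can be sketched in a few lines: at each time step, a small kernel is predicted from the current input alone, softmax-normalized over its width, and applied over a fixed causal window. The sketch below is a minimal, illustrative NumPy version with hypothetical names (`dynamic_convolution`, `W`); it omits the depthwise and multi-head structure of the paper's actual model, but shows why the cost is linear in the sequence length rather than quadratic.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def dynamic_convolution(x, W, k=3):
    """Causal dynamic convolution (illustrative sketch, not the paper's exact model).

    x: (T, d) input sequence; W: (d, k) kernel-prediction weights (hypothetical).
    At each step t, a k-wide kernel is predicted from x[t] alone,
    softmax-normalized, and applied to the window ending at t.
    Cost is O(T * k * d), linear in T, versus the O(T^2) pairwise
    comparisons of self-attention.
    """
    T, d = x.shape
    y = np.zeros_like(x)
    for t in range(T):
        kernel = softmax(x[t] @ W)        # (k,) weights from the current step only
        for j in range(k):
            src = t - (k - 1) + j         # causal window: positions t-k+1 .. t
            if src >= 0:
                y[t] += kernel[j] * x[src]
    return y

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 4))
W = rng.normal(size=(4, 3))
y = dynamic_convolution(x, W)
print(y.shape)
```

A lightweight convolution, by contrast, would use a single softmax-normalized kernel shared across all time steps instead of predicting one per step.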

Felix Wu, Angela Fan, Alexei Baevski, Yann N. Dauphin, Michael Auli • 2019

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Language Modeling | WikiText-103 (test) | Perplexity | 25 | 524 |
| Machine Translation | WMT En-De 2014 (test) | BLEU | 29.7 | 379 |
| Machine Translation | WMT En-Fr 2014 (test) | BLEU | 43.2 | 237 |
| Abstractive Text Summarization | CNN/Daily Mail (test) | ROUGE-L | 36.8 | 169 |
| Machine Translation | IWSLT De-En 2014 (test) | BLEU | 35.2 | 146 |
| Machine Translation | WMT English-German 2014 (test) | BLEU | 29.7 | 136 |
| Machine Translation | IWSLT En-De 2014 (test) | BLEU | 35.2 | 92 |
| Machine Translation | WMT En-De '14 | BLEU | 29.7 | 89 |
| Dialogue Summarization | SamSum (test) | ROUGE-2 | 20.7 | 80 |
| Machine Translation | WMT14 En-De newstest2014 (test) | BLEU | 29.7 | 65 |
Showing 10 of 30 rows

Other info

Code
