
Pay Attention when Required

About

Transformer-based models consist of interleaved feed-forward blocks, which capture content meaning, and relatively more expensive self-attention blocks, which capture context meaning. In this paper, we explore trade-offs in the composition and ordering of these blocks to improve upon the current Transformer architecture, and propose the PAR Transformer. It needs 35% lower compute time than Transformer-XL, achieved by replacing ~63% of the self-attention blocks with feed-forward blocks, while retaining perplexity on the WikiText-103 language modelling benchmark. We further validate our results on the text8 and enwiki8 datasets, as well as on the BERT model.
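
The block composition described above can be sketched in a few lines of PyTorch. This is a minimal illustration of the idea, not the authors' implementation: the module names (FeedForwardBlock, SelfAttentionBlock, PARStyleStack), hyperparameters, and the example block pattern are assumptions chosen for the sketch, not values from the paper.

import torch
import torch.nn as nn

class FeedForwardBlock(nn.Module):
    """Position-wise feed-forward block (captures content meaning)."""
    def __init__(self, d_model, d_ff, dropout=0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_model, d_ff), nn.ReLU(),
            nn.Linear(d_ff, d_model), nn.Dropout(dropout))
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x):
        return self.norm(x + self.net(x))

class SelfAttentionBlock(nn.Module):
    """Self-attention block (captures context meaning, more expensive)."""
    def __init__(self, d_model, n_heads, dropout=0.1):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads,
                                          dropout=dropout, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x):
        out, _ = self.attn(x, x, x, need_weights=False)
        return self.norm(x + out)

class PARStyleStack(nn.Module):
    """Stack blocks in a given pattern: 's' = self-attention, 'f' = feed-forward.
    A PAR-style stack keeps only a few attention blocks instead of strict interleaving."""
    def __init__(self, pattern, d_model=512, d_ff=2048, n_heads=8):
        super().__init__()
        self.blocks = nn.ModuleList(
            SelfAttentionBlock(d_model, n_heads) if b == 's'
            else FeedForwardBlock(d_model, d_ff)
            for b in pattern)

    def forward(self, x):
        for block in self.blocks:
            x = block(x)
        return x

# Baseline interleaving pairs one attention block with each feed-forward block;
# the PAR-style pattern below is a hypothetical example, not taken from the paper.
baseline = PARStyleStack("sf" * 6)
par_like = PARStyleStack("ffsffsffsfff")
x = torch.randn(2, 128, 512)   # (batch, sequence, d_model)
print(par_like(x).shape)       # torch.Size([2, 128, 512])

Replacing most attention blocks with feed-forward blocks in this way reduces compute because each feed-forward block costs far less than a self-attention block over long sequences.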

Swetha Mandava, Szymon Migacz, Alex Fit-Florea • 2020

Related benchmarks

Task | Dataset | Result | Rank
Natural Language Understanding | GLUE (dev) | SST-2 (Acc) 91.6 | 504
Question Answering | SQuAD v1.1 (dev) | F1 Score 87.4 | 375
Language Modeling | WikiText-103 | PPL 18.4 | 146
Language Modeling | enwiki8 | BPC 1.11 | 23
Character-level Language Modeling | text8 | BPC 1.18 | 16
