
RealFormer: Transformer Likes Residual Attention

About

Transformer is the backbone of modern NLP models. In this paper, we propose RealFormer, a simple and generic technique to create Residual Attention Layer Transformer networks that significantly outperform the canonical Transformer and its variants (BERT, ETC, etc.) on a wide spectrum of tasks including Masked Language Modeling, GLUE, SQuAD, Neural Machine Translation, WikiHop, HotpotQA, Natural Questions, and OpenKP. We also observe empirically that RealFormer stabilizes training and leads to models with sparser attention. Source code and pre-trained checkpoints for RealFormer can be found at https://github.com/google-research/google-research/tree/master/realformer.
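The core idea behind a Residual Attention Layer is to add the previous layer's raw (pre-softmax) attention scores to the current layer's scores, creating a residual "skip edge" over attention itself. Below is a minimal single-head NumPy sketch of that idea; the function name `residual_attention` and the two-layer wiring are illustrative assumptions, not the paper's actual implementation (see the linked repository for that).

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def residual_attention(q, k, v, prev_scores=None):
    """Single-head attention with RealFormer-style residual scores.

    The previous layer's raw attention scores are added to this layer's
    scores *before* the softmax. Returns (output, raw_scores) so the
    scores can be threaded into the next layer.
    """
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)      # (n, n) raw attention logits
    if prev_scores is not None:
        scores = scores + prev_scores  # the residual attention edge
    return softmax(scores) @ v, scores

# Chain two layers, passing the raw scores through as the residual.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))                                      # 4 tokens, dim 8
out1, s1 = residual_attention(x, x, x)                           # first layer: no residual
out2, s2 = residual_attention(out1, out1, out1, prev_scores=s1)  # second layer: adds s1
```

Because only an addition of score matrices is introduced, the change is drop-in for a standard Transformer layer and adds no new parameters.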

Ruining He, Anirudh Ravula, Bhargav Kanagal, Joshua Ainslie • 2020

Related benchmarks

Task | Dataset | Result | Rank
Natural Language Understanding | GLUE (dev) | SST-2 (Acc): 94.04 | 504
Machine Translation | WMT En-De 2014 (test) | BLEU: 29.35 | 379
Question Answering | SQuAD v1.1 (dev) | F1: 91.93 | 375
Machine Translation | WMT En-Fr 2014 (test) | BLEU: 43.97 | 237
Question Answering | SQuAD v2.0 (dev) | F1: 82.93 | 158
Question Answering | HotpotQA (dev) | -- | 43
Machine Translation | WMT newstest 2015 (test) | BLEU: 30.36 | 31
Machine Translation | WMT newstest 2016 (test) | BLEU: 34.15 | 31
Machine Translation | WMT newstest 2010 (test) | BLEU: 24.32 | 21
Machine Translation | WMT news average 2010-2016 (test) | Average BLEU: 26.95 | 17

(Showing 10 of 18 rows.)
