Multi-branch Attentive Transformer

About

While multi-branch architectures are among the key ingredients behind the success of many computer vision models, they have not been well investigated in natural language processing, especially for sequence learning tasks. In this work, we propose a simple yet effective variant of the Transformer called the multi-branch attentive Transformer (briefly, MAT), in which the attention layer is the average of multiple branches and each branch is an independent multi-head attention layer. We leverage two techniques to regularize training: drop-branch, which randomly drops individual branches during training, and proximal initialization, which uses a pre-trained Transformer model to initialize the branches. Experiments on machine translation, code generation and natural language understanding demonstrate that this simple variant of the Transformer brings significant improvements. Our code is available at https://github.com/HA-Transformer.
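The layer described above is simple enough to sketch. Below is a minimal PyTorch sketch of a MAT attention layer with drop-branch and proximal initialization, written from the abstract alone; the class and parameter names (MultiBranchAttention, num_branches, drop_branch_prob, proximal_init) and the exact rescaling behaviour of drop-branch are illustrative assumptions, not taken from the authors' repository.

```python
import torch
import torch.nn as nn


class MultiBranchAttention(nn.Module):
    """Attention layer computed as the average of several independent
    multi-head attention branches, as described in the abstract."""

    def __init__(self, embed_dim, num_heads, num_branches=3, drop_branch_prob=0.2):
        super().__init__()
        # Each branch is its own multi-head attention layer with separate weights.
        self.branches = nn.ModuleList(
            nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
            for _ in range(num_branches)
        )
        self.drop_branch_prob = drop_branch_prob

    def proximal_init(self, pretrained_attn: nn.MultiheadAttention):
        # Proximal initialization (as we read the abstract): copy the weights
        # of a pre-trained Transformer's attention layer into every branch.
        for branch in self.branches:
            branch.load_state_dict(pretrained_attn.state_dict())

    def forward(self, query, key, value):
        outputs = [branch(query, key, value)[0] for branch in self.branches]

        if self.training and self.drop_branch_prob > 0:
            # Drop-branch: randomly disable individual branches; averaging
            # only the survivors rescales the output, as dropout does.
            keep = torch.rand(len(outputs)) >= self.drop_branch_prob
            if not keep.any():
                # Always keep at least one branch.
                keep[torch.randint(len(outputs), (1,))] = True
            outputs = [o for o, k in zip(outputs, keep) if k]

        # At inference time this is the plain average over all branches.
        return torch.stack(outputs).mean(dim=0)


# Self-attention usage on a (batch, sequence, features) tensor.
layer = MultiBranchAttention(embed_dim=512, num_heads=8)
x = torch.randn(2, 10, 512)
out = layer(x, x, x)  # shape (2, 10, 512)
```

Since every branch sees the same input and the outputs are averaged, the layer keeps the interface of a standard multi-head attention block and can be dropped into an existing Transformer encoder or decoder.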

Yang Fan, Shufang Xie, Yingce Xia, Lijun Wu, Tao Qin, Xiang-Yang Li, Tie-Yan Liu • 2020

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Machine Translation | WMT En-De 2014 (test) | BLEU | 30.8 | 379 |
| Natural Language Understanding | GLUE (val) | SST-2 | 97 | 170 |
| Machine Translation | IWSLT De-En 2014 (test) | BLEU | 36.22 | 146 |
| Machine Translation | IWSLT German-to-English '14 (test) | BLEU | 36.2 | 110 |
| Machine Translation | IWSLT En-De 2014 (test) | BLEU | 29.9 | 92 |
| Machine Translation | WMT En-De '14 | BLEU | 29.9 | 89 |
| Machine Translation | WMT En-De 2019 (test) | SacreBLEU | 40.4 | 37 |
| Machine Translation | IWSLT De-En 14 | BLEU | 36.22 | 33 |
| Code Generation | Java dataset (test) | BLEU | 27.53 | 6 |
| Code Generation | Python dataset (test) | BLEU | 16.66 | 6 |

Showing 10 of 11 rows.

Other info

Code: https://github.com/HA-Transformer
