Weighted Transformer Network for Machine Translation
About
State-of-the-art results on neural machine translation often use attentional sequence-to-sequence models with some form of convolution or recurrence. Vaswani et al. (2017) propose a new architecture that avoids recurrence and convolution completely. Instead, it uses only self-attention and feed-forward layers. While the proposed architecture achieves state-of-the-art results on several machine translation tasks, it requires a large number of parameters and training iterations to converge. We propose the Weighted Transformer, a Transformer with modified attention layers, that not only outperforms the baseline network in BLEU score but also converges 15-40% faster. Specifically, we replace the multi-head attention with multiple self-attention branches that the model learns to combine during the training process. Our model improves the state-of-the-art performance by 0.5 BLEU points on the WMT 2014 English-to-German translation task and by 0.4 on the English-to-French translation task.
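To make the architectural change concrete, below is a minimal, self-contained PyTorch sketch of the branched-attention idea: several independent self-attention branches whose outputs are combined through learned, normalized scalar weights. The class name `BranchedSelfAttention`, the softmax parameterization of the branch weights, and all shapes are illustrative assumptions, not the paper's exact implementation (the paper also applies a second set of learned coefficients to the branch feed-forward outputs).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BranchedSelfAttention(nn.Module):
    """Sketch of branched attention: M independent self-attention
    branches whose outputs are mixed with learned scalar weights.
    Hypothetical simplification of the Weighted Transformer layer."""

    def __init__(self, d_model=512, num_branches=8):
        super().__init__()
        self.num_branches = num_branches
        self.d_head = d_model // num_branches
        # Shared projection producing queries, keys, and values.
        self.qkv = nn.Linear(d_model, 3 * d_model)
        # Per-branch output projections (each branch: d_head -> d_model).
        self.out = nn.ModuleList(
            nn.Linear(self.d_head, d_model) for _ in range(num_branches)
        )
        # Learned combination weights; softmax keeps them positive and
        # summing to one (assumed parameterization, not from the paper).
        self.branch_logits = nn.Parameter(torch.zeros(num_branches))

    def forward(self, x):
        # x: (batch, seq_len, d_model)
        b, t, _ = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)

        def split(z):
            # Reshape to (batch, branches, seq_len, d_head).
            return z.view(b, t, self.num_branches, self.d_head).transpose(1, 2)

        q, k, v = split(q), split(k), split(v)
        # Scaled dot-product attention, computed independently per branch.
        scores = q @ k.transpose(-2, -1) / self.d_head ** 0.5
        ctx = F.softmax(scores, dim=-1) @ v  # (b, branches, t, d_head)
        # Combine branch outputs with the learned, normalized weights.
        w = F.softmax(self.branch_logits, dim=0)
        return sum(
            w[i] * self.out[i](ctx[:, i]) for i in range(self.num_branches)
        )

layer = BranchedSelfAttention()
x = torch.randn(2, 10, 512)
print(layer(x).shape)  # torch.Size([2, 10, 512])
```

Initializing the branch logits to zero starts training from a uniform mixture, and the softmax normalization lets the model learn, jointly with the rest of the network, how much each branch contributes, which is the mechanism the abstract describes.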
Related benchmarks
| Task | Dataset | Metric | Result |
|---|---|---|---|
| Machine Translation | WMT 2014 English-German (newstest2014) | BLEU | 28.9 |
| Machine Translation | WMT 2014 English-French (newstest2014) | BLEU | 41.4 |