
Transformers without Tears: Improving the Normalization of Self-Attention

About

We evaluate three simple, normalization-centric changes to improve Transformer training. First, we show that pre-norm residual connections (PreNorm) and smaller initializations enable warmup-free, validation-based training with large learning rates. Second, we propose $\ell_2$ normalization with a single scale parameter (ScaleNorm) for faster training and better performance. Finally, we reaffirm the effectiveness of normalizing word embeddings to a fixed length (FixNorm). On five low-resource translation pairs from TED Talks-based corpora, these changes always converge, giving an average +1.1 BLEU over state-of-the-art bilingual baselines and a new 32.8 BLEU on IWSLT'15 English-Vietnamese. We observe sharper performance curves, more consistent gradient norms, and a linear relationship between activation scaling and decoder depth. Surprisingly, in the high-resource setting (WMT'14 English-German), ScaleNorm and FixNorm remain competitive but PreNorm degrades performance.
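The two normalizations described in the abstract are simple to state: ScaleNorm replaces LayerNorm with $\ell_2$ normalization followed by a single learned scalar, and FixNorm projects each word embedding onto a sphere of fixed radius. A minimal NumPy sketch, with function names and the epsilon constant being illustrative choices rather than taken from the paper's codebase:

```python
import numpy as np

def scale_norm(x, g, eps=1e-5):
    """ScaleNorm: l2-normalize the last axis, then multiply by one
    learned scalar g (a single parameter per layer, unlike LayerNorm's
    per-dimension gains and biases)."""
    norm = np.linalg.norm(x, axis=-1, keepdims=True)
    return g * x / (norm + eps)

def fix_norm(embedding, radius=1.0, eps=1e-5):
    """FixNorm: rescale each word embedding to a fixed l2 length,
    so all embeddings lie on a common sphere."""
    norm = np.linalg.norm(embedding, axis=-1, keepdims=True)
    return radius * embedding / (norm + eps)
```

After ScaleNorm, every row of the output has $\ell_2$ norm (approximately) equal to `g`, regardless of the input's scale; in the paper `g` is trainable, initialized relative to the model dimension.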

Toan Q. Nguyen, Julian Salazar • 2019

Related benchmarks

Task                 Dataset                         Metric       Result  Rank
Machine Translation  WMT English-German 2014 (test)  BLEU         27.57   136
Machine Translation  sk-en (test)                    BLEU         30.25   15
Machine Translation  gl-en (test)                    BLEU         20.91   5
Machine Translation  en-vi (test)                    BLEU Score   32.79   5
Machine Translation  en-he (test)                    BLEU         28.44   5
Machine Translation  ar-en (test)                    BLEU Score   34.35   5

Other info

Code
