Transformers without Tears: Improving the Normalization of Self-Attention
About
We evaluate three simple, normalization-centric changes to improve Transformer training. First, we show that pre-norm residual connections (PreNorm) and smaller initializations enable warmup-free, validation-based training with large learning rates. Second, we propose $\ell_2$ normalization with a single scale parameter (ScaleNorm) for faster training and better performance. Finally, we reaffirm the effectiveness of normalizing word embeddings to a fixed length (FixNorm). On five low-resource translation pairs from TED Talks-based corpora, these changes always converge, giving an average +1.1 BLEU over state-of-the-art bilingual baselines and a new 32.8 BLEU on IWSLT'15 English-Vietnamese. We observe sharper performance curves, more consistent gradient norms, and a linear relationship between activation scaling and decoder depth. Surprisingly, in the high-resource setting (WMT'14 English-German), ScaleNorm and FixNorm remain competitive but PreNorm degrades performance.
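The abstract's ScaleNorm is $\ell_2$ normalization with a single learned scale $g$ (the paper initializes $g = \sqrt{d}$), and FixNorm is the same operation on word embeddings with the scale held fixed. A minimal NumPy sketch, with the function name and the `eps` guard as illustrative choices:

```python
import numpy as np

def scale_norm(x, g, eps=1e-5):
    """ScaleNorm: l2-normalize along the last axis, then multiply by a
    single scalar g. With g fixed (not learned), this is FixNorm."""
    norm = np.linalg.norm(x, axis=-1, keepdims=True)
    return g * x / np.maximum(norm, eps)

d = 4
g = np.sqrt(d)                       # initialization suggested in the paper
x = np.array([3.0, 4.0, 0.0, 0.0])   # l2 norm 5
y = scale_norm(x, g)
# the output vector's l2 norm equals g
```

Unlike LayerNorm, which learns per-dimension gain and bias vectors, this uses one scalar for the whole layer, which is where the faster-training claim comes from.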
Related benchmarks
| Task | Dataset | Metric | Score | Rank |
|---|---|---|---|---|
| Machine Translation | WMT English-German 2014 (test) | BLEU | 27.57 | 136 |
| Machine Translation | sk-en (test) | BLEU | 30.25 | 15 |
| Machine Translation | gl-en (test) | BLEU | 20.91 | 5 |
| Machine Translation | en-vi (test) | BLEU | 32.79 | 5 |
| Machine Translation | en-he (test) | BLEU | 28.44 | 5 |
| Machine Translation | ar-en (test) | BLEU | 34.35 | 5 |