
Fixup Initialization: Residual Learning Without Normalization

About

Normalization layers are a staple in state-of-the-art deep neural network architectures. They are widely believed to stabilize training, enable higher learning rates, accelerate convergence, and improve generalization, though the reason for their effectiveness is still an active research topic. In this work, we challenge the commonly held beliefs by showing that none of the perceived benefits is unique to normalization. Specifically, we propose fixed-update initialization (Fixup), an initialization motivated by solving the exploding and vanishing gradient problem at the beginning of training via properly rescaling a standard initialization. We find training residual networks with Fixup to be as stable as training with normalization, even for networks with 10,000 layers. Furthermore, with proper regularization, Fixup enables residual networks without normalization to achieve state-of-the-art performance in image classification and machine translation.
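The rescaling the abstract describes can be sketched in plain NumPy. Following the paper's recipe, the weight layers inside each residual branch are given a standard (He) initialization scaled down by L^(-1/(2m-2)), where L is the number of residual branches and m the number of layers per branch, and the last layer of each branch is initialized to zero. The function name `fixup_branch_init` and its parameters are illustrative, not from the paper's code.

```python
import numpy as np

def fixup_branch_init(num_blocks, dim, m=2, rng=None):
    """Sketch of Fixup initialization for one residual branch.

    num_blocks : L, the number of residual branches in the network.
    dim        : width of the (square) weight matrices, for simplicity.
    m          : number of weight layers inside the branch.

    The first m-1 layers get He init scaled by L^(-1/(2m-2));
    the final layer is zero, so each branch starts as the identity map.
    """
    rng = np.random.default_rng() if rng is None else rng
    scale = num_blocks ** (-1.0 / (2 * m - 2))
    layers = []
    for _ in range(m - 1):
        # Standard He initialization: std = sqrt(2 / fan_in).
        w = rng.standard_normal((dim, dim)) * np.sqrt(2.0 / dim)
        layers.append(w * scale)
    # Zero-init the last layer of the branch (part of the Fixup recipe).
    layers.append(np.zeros((dim, dim)))
    return layers
```

With m = 2 and L = 16 blocks, the scale factor is 16^(-1/2) = 0.25, so the first layer's weights have standard deviation 0.25 · sqrt(2/dim). The paper additionally zero-initializes the classification layer and adds scalar biases and a multiplier per branch, which this sketch omits.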

Hongyi Zhang, Yann N. Dauphin, Tengyu Ma · 2019

Related benchmarks

| Task                 | Dataset                             | Metric         | Result | Rank |
|----------------------|-------------------------------------|----------------|--------|------|
| Image Classification | CIFAR-10 (test)                     | -              | -      | 3381 |
| Image Classification | ImageNet (test)                     | Top-1 Accuracy | 75.7   | 235  |
| Classification       | SVHN (test)                         | Error Rate     | 1.4    | 182  |
| Machine Translation  | IWSLT De-En 2014 (test)             | BLEU           | 34.5   | 146  |
| Machine Translation  | IWSLT German-to-English '14 (test)  | BLEU Score     | 35.59  | 110  |
| Machine Translation  | IWSLT En-De 2014 (test)             | BLEU           | 34.5   | 92   |
| Machine Translation  | WMT EN-DE 2017 (test)               | BLEU Score     | 0.284  | 46   |
| Machine Translation  | WMT en-de                           | BLEU           | 29.3   | 10   |
| Machine Translation  | IWSLT DE-EN                         | BLEU Score     | 34.5   | 3    |
