
ReZero is All You Need: Fast Convergence at Large Depth

About

Deep networks often suffer from vanishing or exploding gradients due to inefficient signal propagation, leading to long training times or convergence difficulties. Various architecture designs, sophisticated residual-style networks, and initialization schemes have been shown to improve deep signal propagation. Recently, Pennington et al. used free probability theory to show that dynamical isometry plays an integral role in efficient deep learning. We show that the simplest architecture change of gating each residual connection using a single zero-initialized parameter satisfies initial dynamical isometry and outperforms more complex approaches. Although much simpler than its predecessors, this gate enables training thousands of fully connected layers with fast convergence and better test performance for ResNets trained on CIFAR-10. We apply this technique to language modeling and find that we can easily train 120-layer Transformers. Applied to 12-layer Transformers, it converges 56% faster on enwiki8.
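The gating idea in the abstract can be sketched in a few lines: each residual branch F is multiplied by a single scalar alpha that starts at zero, so every layer begins as the identity map. This is a minimal illustrative sketch in plain Python (the names `rezero_block` and `alpha` are ours, not from the paper):

```python
def rezero_block(x, f, alpha):
    """One ReZero-style residual update: x + alpha * f(x).

    x     -- input vector (list of floats)
    f     -- the residual branch (any function of x)
    alpha -- scalar gate, initialized to 0.0 in the paper
    """
    return [xi + alpha * fi for xi, fi in zip(x, f(x))]


x = [1.0, -2.0, 3.0]
branch = lambda v: [2.0 * vi for vi in v]  # stand-in residual branch

# At initialization alpha = 0, so the block is exactly the identity,
# which gives the initial dynamical isometry the abstract describes.
print(rezero_block(x, branch, alpha=0.0))  # -> [1.0, -2.0, 3.0]

# As training increases alpha, the residual branch contributes.
print(rezero_block(x, branch, alpha=0.5))  # -> [2.0, -4.0, 6.0]
```

Because the identity initialization makes arbitrarily deep stacks trivially stable at step zero, the same one-parameter gate applies unchanged to fully connected layers, ResNet blocks, or Transformer sublayers.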

Thomas Bachlechner, Bodhisattwa Prasad Majumder, Huanru Henry Mao, Garrison W. Cottrell, Julian McAuley · 2020

Related benchmarks

| Task                | Dataset                            | Result           | Rank |
|---------------------|------------------------------------|------------------|------|
| Machine Translation | IWSLT German-to-English '14 (test) | BLEU Score 34.55 | 110  |
| Machine Translation | WMT EN-DE 2017 (test)              | BLEU Score 0.269 | 46   |
