
Decoupled Weight Decay Regularization

About

L2 regularization and weight decay regularization are equivalent for standard stochastic gradient descent (when rescaled by the learning rate), but as we demonstrate this is not the case for adaptive gradient algorithms, such as Adam. While common implementations of these algorithms employ L2 regularization (often calling it "weight decay" in what may be misleading due to the inequivalence we expose), we propose a simple modification to recover the original formulation of weight decay regularization by decoupling the weight decay from the optimization steps taken w.r.t. the loss function. We provide empirical evidence that our proposed modification (i) decouples the optimal choice of weight decay factor from the setting of the learning rate for both standard SGD and Adam and (ii) substantially improves Adam's generalization performance, allowing it to compete with SGD with momentum on image classification datasets (on which it was previously typically outperformed by the latter). Our proposed decoupled weight decay has already been adopted by many researchers, and the community has implemented it in TensorFlow and PyTorch; the complete source code for our experiments is available at https://github.com/loshchil/AdamW-and-SGDW
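The decoupled update described in the abstract can be illustrated with a minimal sketch of an AdamW-style step. This is not the authors' reference implementation; the function name, argument defaults, and state layout are assumptions chosen for clarity. The key point is that the weight-decay term multiplies the weight directly and is added outside the adaptive (moment-normalized) part of the step, rather than being folded into the gradient as an L2 penalty.

```python
import math

def adamw_step(params, grads, state, lr=1e-3, beta1=0.9, beta2=0.999,
               eps=1e-8, weight_decay=1e-2):
    """One AdamW-style step on a list of scalar parameters.

    Weight decay is applied directly to each weight, decoupled from the
    gradient-based Adam update (a sketch; names and defaults are illustrative).
    """
    state["t"] += 1
    t = state["t"]
    new_params = []
    for i, (p, g) in enumerate(zip(params, grads)):
        # Moment estimates use only the loss gradient; no L2 term is
        # mixed into g, unlike the common "weight decay" implementations.
        state["m"][i] = beta1 * state["m"][i] + (1 - beta1) * g
        state["v"][i] = beta2 * state["v"][i] + (1 - beta2) * g * g
        m_hat = state["m"][i] / (1 - beta1 ** t)
        v_hat = state["v"][i] / (1 - beta2 ** t)
        # Decoupled weight decay: the shrinkage lr * weight_decay * p is
        # added outside the adaptive term, so it is not rescaled by
        # 1 / (sqrt(v_hat) + eps).
        p = p - lr * (m_hat / (math.sqrt(v_hat) + eps) + weight_decay * p)
        new_params.append(p)
    return new_params
```

With a zero gradient the adaptive term vanishes and only the decay acts, multiplying each weight by (1 - lr * weight_decay) per step; under coupled L2 regularization the decay would instead pass through the adaptive rescaling, which is the inequivalence the paper exposes.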

Ilya Loshchilov, Frank Hutter • 2017

Related benchmarks

Task                     Dataset             Result            Rank
Image Classification     CIFAR-100 (test)    --                3518
Image Classification     CIFAR-10 (test)     Accuracy 88.9     3381
Commonsense Reasoning    HellaSwag           Accuracy 79.2     1460
Image Classification     ImageNet-1k (val)   --                1453
Image Classification     ImageNet (val)      --                1206
Code Generation          HumanEval           --                850
Language Understanding   MMLU                Accuracy 49.8     756
Question Answering       ARC Challenge       --                749
Image Super-resolution   Manga109            PSNR 25.09        656
Image Classification     ImageNet A          Top-1 Acc 60.64   553

Showing 10 of 139 rows
