
Improving Robustness with Adaptive Weight Decay

About

We propose adaptive weight decay, which automatically tunes the weight-decay hyper-parameter during each training iteration. For classification problems, we propose adjusting the weight-decay hyper-parameter on the fly based on the strength of updates from the classification loss (i.e., the gradient of the cross-entropy) and the regularization loss (i.e., the $\ell_2$-norm of the weights). We show that this simple modification yields large improvements in adversarial robustness -- an area that suffers from robust overfitting -- without requiring extra data, across various datasets and architecture choices. For example, our reformulation results in a $20\%$ relative robustness improvement on CIFAR-100 and a $10\%$ relative robustness improvement on CIFAR-10 compared to the best-tuned hyper-parameters of traditional weight decay, yielding models with performance comparable to SOTA robustness methods. In addition, the method has other desirable properties, such as reduced sensitivity to the learning rate and smaller weight norms; the latter contributes to robustness against overfitting to label noise, and to pruning.
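The idea above can be sketched in a few lines: instead of a fixed weight-decay coefficient, each step scales the coefficient by the ratio of the cross-entropy gradient norm to the weight norm, so the regularization update tracks the strength of the classification update. This is a minimal NumPy sketch under that reading of the abstract; the function names, the `lambda_awd` hyper-parameter name, and the small epsilon for numerical safety are illustrative choices, not the paper's exact formulation.

```python
import numpy as np

def awd_coefficient(grad_ce, weights, lambda_awd=0.02):
    """Adaptive weight-decay coefficient (sketch).

    Scales decay by ||grad of cross-entropy|| / ||w||, so the
    regularization strength follows the classification-loss updates.
    """
    grad_norm = np.linalg.norm(grad_ce)
    weight_norm = np.linalg.norm(weights)
    return lambda_awd * grad_norm / (weight_norm + 1e-12)  # epsilon avoids div by zero

def sgd_step_with_awd(weights, grad_ce, lr=0.1, lambda_awd=0.02):
    """One SGD step where the decay term uses the adaptive coefficient."""
    lam = awd_coefficient(grad_ce, weights, lambda_awd)
    return weights - lr * (grad_ce + lam * weights)
```

Because the coefficient shrinks as the cross-entropy gradients shrink late in training, the decay term does not dominate the update, which is one plausible reading of why the method is less sensitive to the learning rate.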

Amin Ghiasi, Ali Shafahi, Reza Ardekani• 2022

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Adversarial Robustness | CIFAR-10 (test) | -- | -- | 76 |
| Adversarial Robustness | CIFAR-100 (test) | Natural Acc | 64.49 | 46 |
| Language Modeling | OpenWebText GPT-2 (test) | Perplexity | 18.42 | 13 |
| Image Classification | CIFAR-10 | Weight L2 Norm | 7.11 | 2 |
| Image Classification | CIFAR-100 | Weight L2 Norm | 13.41 | 2 |
| Image Classification | Tiny-ImageNet | Weight L2 Norm | 15.01 | 2 |
| Image Classification | FashionMNIST | Weight L2 Norm | 9.05 | 2 |
| Image Classification | Flowers | Weight L2 Norm | 13.87 | 2 |
| Image Classification | SVHN | Weight L2 Norm | 5.39 | 2 |
