
Sharpness-Aware Minimization for Efficiently Improving Generalization

About

In today's heavily overparameterized models, the value of the training loss provides few guarantees on model generalization ability. Indeed, optimizing only the training loss value, as is commonly done, can easily lead to suboptimal model quality. Motivated by prior work connecting the geometry of the loss landscape and generalization, we introduce a novel, effective procedure for instead simultaneously minimizing loss value and loss sharpness. In particular, our procedure, Sharpness-Aware Minimization (SAM), seeks parameters that lie in neighborhoods having uniformly low loss; this formulation results in a min-max optimization problem on which gradient descent can be performed efficiently. We present empirical results showing that SAM improves model generalization across a variety of benchmark datasets (e.g., CIFAR-10, CIFAR-100, ImageNet, finetuning tasks) and models, yielding novel state-of-the-art performance for several. Additionally, we find that SAM natively provides robustness to label noise on par with that provided by state-of-the-art procedures that specifically target learning with noisy labels. We open source our code at https://github.com/google-research/sam.
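Concretely, the min-max problem the abstract refers to is min_w max_{||eps||_2 <= rho} L(w + eps): find weights whose entire rho-neighborhood has low loss. The paper approximates the inner maximization with a single normalized gradient ascent step, yielding a two-phase update. Below is a minimal sketch of that update in JAX (the framework of the released code); the toy squared-error loss, hyperparameter values, and function names here are illustrative assumptions, not the authors' exact implementation.

    # Minimal SAM-style update sketch (illustrative; not the released code).
    import jax
    import jax.numpy as jnp

    def loss(w, x, y):
        # Toy squared-error loss standing in for a real training loss.
        return jnp.mean((x @ w - y) ** 2)

    @jax.jit
    def sam_step(w, x, y, rho=0.05, lr=0.1):
        # 1) Inner maximization: take one normalized ascent step of size rho
        #    toward higher loss (first-order approximation of the worst-case
        #    perturbation eps within the rho-ball).
        g = jax.grad(loss)(w, x, y)
        eps = rho * g / (jnp.linalg.norm(g) + 1e-12)
        # 2) Outer minimization: descend using the gradient evaluated at the
        #    perturbed point w + eps.
        g_sam = jax.grad(loss)(w + eps, x, y)
        return w - lr * g_sam

    # Usage: a few steps on random linear-regression data.
    key = jax.random.PRNGKey(0)
    x = jax.random.normal(key, (32, 4))
    w_true = jnp.array([1.0, -2.0, 0.5, 3.0])
    y = x @ w_true
    w = jnp.zeros(4)
    for _ in range(100):
        w = sam_step(w, x, y)

Note that each step computes two gradients (one at w, one at w + eps), so SAM costs roughly two forward-backward passes per update; this is what makes gradient descent on the min-max objective efficient in practice.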

Pierre Foret, Ariel Kleiner, Hossein Mobahi, Behnam Neyshabur · 2020

Related benchmarks

Task | Dataset | Metric | Result | Rank
Image Classification | CIFAR-100 (test) | Accuracy | 91.79 | 3518
Image Classification | CIFAR-10 (test) | Accuracy | 98.87 | 3381
Image Classification | ImageNet-1k (val) | -- | -- | 1453
Image Classification | CIFAR-10 (test) | -- | -- | 906
Image Classification | ImageNet-1k (test) | Top-1 Accuracy | 77.25 | 798
Natural Language Inference | SNLI (test) | Accuracy | 85.5 | 681
Image Classification | CIFAR-100 | -- | -- | 622
Image Classification | CIFAR-10 (test) | Accuracy | 97.85 | 585
Image Classification | Food-101 | -- | -- | 494
Image Classification | ImageNet | Top-1 Accuracy | 88.6 | 429
Showing 10 of 91 rows
...

Other info

Code: https://github.com/google-research/sam
