
Normalization Layers Are All That Sharpness-Aware Minimization Needs

About

Sharpness-aware minimization (SAM) was proposed to reduce the sharpness of minima and has been shown to enhance generalization performance in various settings. In this work we show that perturbing only the affine normalization parameters (typically comprising 0.1% of the total parameters) in the adversarial step of SAM can outperform perturbing all of the parameters. This finding generalizes to different SAM variants and to both ResNet (Batch Normalization) and Vision Transformer (Layer Normalization) architectures. We consider alternative sparse perturbation approaches and find that these do not achieve a similar performance enhancement at such extreme sparsity levels, showing that this behaviour is unique to the normalization layers. Although our findings reaffirm the effectiveness of SAM in improving generalization performance, they cast doubt on whether this is solely caused by reduced sharpness.
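The "norm-only" variant described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: SAM's adversarial ascent step is restricted to the normalization affine parameters (here given the hypothetical names `gamma` and `beta`), while the descent step updates all parameters. The toy quadratic loss and all parameter names are assumptions made for the example.

```python
import math

def loss(p):
    # Hypothetical toy quadratic loss, standing in for a training loss
    return 0.5 * ((p["w"] - 1.0) ** 2 + (p["gamma"] - 2.0) ** 2 + (p["beta"] + 1.0) ** 2)

def grad_fn(p):
    # Analytic gradient of the toy loss above
    return {"w": p["w"] - 1.0, "gamma": p["gamma"] - 2.0, "beta": p["beta"] + 1.0}

def sam_norm_only_step(params, lr=0.1, rho=0.05, norm_keys=("gamma", "beta")):
    g = grad_fn(params)
    # Gradient norm restricted to the normalization affine parameters
    g_norm = math.sqrt(sum(g[k] ** 2 for k in norm_keys)) + 1e-12
    # Adversarial step: ascend the loss along the normalization directions ONLY
    perturbed = {k: v + (rho * g[k] / g_norm if k in norm_keys else 0.0)
                 for k, v in params.items()}
    # Descent step: the gradient at the perturbed point updates ALL parameters
    sam_g = grad_fn(perturbed)
    return {k: v - lr * sam_g[k] for k, v in params.items()}

params = {"w": 0.0, "gamma": 0.0, "beta": 0.0}
for _ in range(100):
    params = sam_norm_only_step(params)
```

Note that setting `norm_keys` to all parameter names recovers standard SAM; the point of the paper is that restricting the perturbation to the tiny normalization subset can work as well or better.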

Maximilian Mueller, Tiffany Vlaar, David Rolnick, Matthias Hein • 2023

Related benchmarks

Task                 | Dataset            | Metric         | Result | Rank
---------------------|--------------------|----------------|--------|-----
Image Classification | CIFAR-100 (test)   | --             | --     | 3518
Image Classification | ImageNet (val)     | Top-1 Acc      | 78.7   | 1206
Image Classification | CIFAR-10 (test)    | Accuracy       | 96.58  | 906
Image Classification | ImageNet 1k (test) | Top-1 Accuracy | 77.47  | 798
