
When Do Flat Minima Optimizers Work?

About

Recently, flat-minima optimizers, which seek to find parameters in low-loss neighborhoods, have been shown to improve a neural network's generalization performance over stochastic and adaptive gradient-based optimizers. Two methods have received significant attention due to their scalability: (1) Stochastic Weight Averaging (SWA) and (2) Sharpness-Aware Minimization (SAM). However, there has been limited investigation into their properties and no systematic benchmarking of them across different domains. We fill this gap here by comparing the loss surfaces of the models trained with each method and through broad benchmarking across computer vision, natural language processing, and graph representation learning tasks. We discover several surprising findings from these results, which we hope will help researchers further improve deep learning optimizers, and practitioners identify the right optimizer for their problem.

Jean Kaddour, Linqing Liu, Ricardo Silva, Matt J. Kusner • 2022
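The two optimizers compared in the paper can be summarized in a few lines of training-loop code. The sketch below is illustrative only, not the authors' released implementation: it uses PyTorch's torch.optim.swa_utils for SWA and a hand-rolled two-step update for SAM, and the model, data loader, and hyperparameters (swa_start, rho) are hypothetical placeholders.

import torch
import torch.nn as nn
from torch.optim.swa_utils import AveragedModel, SWALR

model = nn.Linear(10, 2)                      # hypothetical placeholder model
criterion = nn.CrossEntropyLoss()
base_opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)

# --- SWA: average the weights visited late in training ---
swa_model = AveragedModel(model)              # keeps a running average of the parameters
swa_scheduler = SWALR(base_opt, swa_lr=0.05)

def train_epoch_with_swa(loader, epoch, swa_start=75):  # swa_start is an assumed hyperparameter
    for x, y in loader:
        base_opt.zero_grad()
        criterion(model(x), y).backward()
        base_opt.step()
    if epoch >= swa_start:                    # average once per epoch in the final phase
        swa_model.update_parameters(model)
        swa_scheduler.step()

# --- SAM: ascend to a nearby high-loss point, then descend from there ---
def sam_step(x, y, rho=0.05):                 # rho is the neighborhood radius (assumed value)
    # 1) gradient at the current weights
    base_opt.zero_grad()
    criterion(model(x), y).backward()
    grads = [p.grad for p in model.parameters() if p.grad is not None]
    grad_norm = torch.norm(torch.stack([g.norm() for g in grads]))
    # 2) perturb the weights along the gradient (approximate worst case in an L2 ball)
    eps = []
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is None:
                eps.append(None)
                continue
            e = rho * p.grad / (grad_norm + 1e-12)
            p.add_(e)
            eps.append(e)
    # 3) gradient at the perturbed weights
    base_opt.zero_grad()
    criterion(model(x), y).backward()
    # 4) undo the perturbation and step with the sharpness-aware gradient
    with torch.no_grad():
        for p, e in zip(model.parameters(), eps):
            if e is not None:
                p.sub_(e)
    base_opt.step()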

Related benchmarks

Task                               | Dataset            | Metric            | Result | Rank
Question Answering                 | SQuAD 2.0          | F1                | 81.21  | 190
Question Answering                 | SQuAD v2.0 (dev)   | F1                | 80.31  | 158
Abstractive Summarization          | SamSum             | ROUGE-2           | 27.61  | 73
General Language Understanding     | GLUE v1 (test dev) | MNLI              | 87.44  | 40
Summarization                      | SamSum (test)      | ROUGE-1           | 52.21  | 18
Dialogue Generation                | E2E                | BLEU              | 63.5   | 10
Multiple-choice Question Answering | KorMedMCQA (test)  | Accuracy (Doctor) | 43.96  | 7
Summarization                      | Summarization      | Grade             | 71.92  | 6
