
AdaSAM: Boosting Sharpness-Aware Minimization with Adaptive Learning Rate and Momentum for Training Deep Neural Networks

About

The sharpness-aware minimization (SAM) optimizer has been extensively explored, as it generalizes better when training deep neural networks by introducing an extra perturbation step that flattens the loss landscape. Integrating SAM with an adaptive learning rate and momentum acceleration, dubbed AdaSAM, has already been explored empirically for training large-scale deep neural networks, but without theoretical guarantees, owing to the triple difficulty of analyzing the coupled perturbation step, adaptive learning rate, and momentum step. In this paper, we analyze the convergence rate of AdaSAM in the stochastic non-convex setting. We theoretically show that AdaSAM admits a $\mathcal{O}(1/\sqrt{bT})$ convergence rate, which achieves the linear-speedup property with respect to the mini-batch size $b$. Specifically, to decouple the stochastic gradient steps from the adaptive learning rate and the perturbed gradient, we introduce a delayed second-order momentum term that makes them independent when taking expectations in the analysis. We then bound these terms by showing that the adaptive learning rate has a limited range, which makes our analysis feasible. To the best of our knowledge, we are the first to provide a non-trivial convergence rate for SAM with an adaptive learning rate and momentum acceleration. Finally, we conduct experiments on several NLP tasks, which show that AdaSAM achieves superior performance compared with the SGD, AMSGrad, and SAM optimizers.
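As a rough illustration of the update the abstract describes, the sketch below combines SAM's perturbation step with an Adam-style adaptive-learning-rate and momentum step on a plain NumPy parameter vector. This is a minimal reading of the idea, not the paper's implementation; the function name `adasam_step`, the hyperparameter defaults, and the `grad_fn` interface are all assumptions made for illustration.

```python
import numpy as np

def adasam_step(w, grad_fn, m, v, rho=0.05, lr=1e-3,
                beta1=0.9, beta2=0.999, eps=1e-8):
    """One illustrative AdaSAM-style update: a SAM perturbation step
    followed by an Adam-style adaptive/momentum descent step."""
    # 1. Perturbation step: ascend along the gradient direction to the
    #    (approximate) worst-case point within an L2 ball of radius rho.
    g = grad_fn(w)
    w_adv = w + rho * g / (np.linalg.norm(g) + eps)
    # 2. Evaluate the gradient at the perturbed point.
    g_adv = grad_fn(w_adv)
    # 3. First-moment (momentum) and second-moment (adaptive) updates.
    m = beta1 * m + (1 - beta1) * g_adv
    v = beta2 * v + (1 - beta2) * g_adv ** 2
    # 4. Adaptive descent step using the perturbed gradient.
    w = w - lr * m / (np.sqrt(v) + eps)
    return w, m, v

# Toy usage: minimize f(w) = ||w||^2 / 2, whose gradient is w itself.
w = np.array([1.0, -2.0])
m = np.zeros_like(w)
v = np.zeros_like(w)
for _ in range(100):
    w, m, v = adasam_step(w, lambda x: x, m, v)
```

On this toy quadratic the iterates shrink toward the origin; the point of the sketch is only to show how the perturbed gradient `g_adv`, rather than the raw gradient, feeds the momentum and second-moment accumulators.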

Hao Sun, Li Shen, Qihuang Zhong, Liang Ding, Shixiang Chen, Jingwei Sun, Jing Li, Guangzhong Sun, Dacheng Tao • 2023

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Commonsense Reasoning | HellaSwag | Accuracy: 70.17 | 1460 |
| Question Answering | OpenBookQA | Accuracy: 36.2 | 465 |
| Natural Language Inference | RTE | Accuracy: 72.56 | 367 |
| Boolean Question Answering | BoolQ | Accuracy: 79.2 | 307 |
| Science Question Answering | ARC Challenge | Accuracy: 44.28 | 234 |
| Natural Language Understanding | GLUE (test dev) | MRPC Accuracy: 92.5 | 81 |
| Multiple-choice Question Answering | MMLU | STEM Accuracy: 50.49 | 13 |
| Linguistic Acceptability | CoLA | Max Memory (MB): 3.30e+3 | 5 |
| Natural Language Inference | MNLI | Max Memory (MB): 8.08e+3 | 5 |
| Fine-tuning | Open-Platypus | Max Memory (MB): 5.13e+4 | 4 |
