Cautious Optimizers: Improving Training with One Line of Code

About

AdamW has been the default optimizer for transformer pretraining. For many years, our community has searched for faster and more stable optimizers, with only limited positive outcomes. In this work, we propose a one-line modification in PyTorch to any momentum-based optimizer, which we rename the cautious optimizer, e.g. C-AdamW and C-Lion. Our theoretical result shows that this modification preserves Adam's Hamiltonian function and does not break the convergence guarantee under Lyapunov analysis. Moreover, our theoretical insight reveals a whole new family of optimizers. Among them, we pick the simplest one for empirical experiments, showing consistent speed-ups not only on LLM pretraining but also on image classification, with minimal extra hyperparameter tuning. Code is available at https://github.com/kyleliang919/C-Optim.
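The core idea behind the cautious modification is to mask out the components of a momentum-based update whose sign disagrees with the current gradient, and rescale the rest. The sketch below illustrates this in NumPy rather than PyTorch so it is self-contained; the function name `cautious_update` and the `eps` rescaling constant are illustrative assumptions, and the authoritative implementation is in the linked C-Optim repository.

```python
import numpy as np

def cautious_update(update, grad, eps=1e-8):
    """Apply cautious masking to a raw optimizer update (e.g. Adam's
    momentum term) given the current gradient.

    Components where the update direction disagrees with the gradient
    sign are zeroed; the surviving components are rescaled so the
    average update magnitude stays comparable.
    """
    # 1 where update and gradient agree in sign, 0 otherwise.
    mask = (update * grad > 0).astype(update.dtype)
    # Rescale so the mean of the mask is (approximately) 1.
    scale = mask.size / (mask.sum() + eps)
    return update * mask * scale

# Example: the second and third components disagree with the gradient,
# so they are masked out and the first component is rescaled.
u = np.array([1.0, -2.0, 3.0])   # raw momentum update
g = np.array([1.0, 1.0, -1.0])   # current gradient
print(cautious_update(u, g))
```

In an actual optimizer loop, this masking would be applied to the momentum term just before the parameter step, which is what makes it a one-line change to existing momentum-based optimizers.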

Kaizhao Liang, Lizhang Chen, Bo Liu, Qiang Liu • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Question Answering | ARC Easy | Accuracy | 60.9 | 386 |
| Physical Commonsense Reasoning | PIQA | Accuracy | 67.68 | 329 |
| Common Sense Reasoning | HellaSwag | Accuracy | 41.93 | 164 |
| Multi-task Language Understanding | MMLU | Accuracy | 25.35 | 87 |
| Language Modeling | Lambada OpenAI | Accuracy | 32.29 | 61 |
| Question Answering | ARC Challenge | Normalized Accuracy | 29.78 | 48 |
| Language Model Pre-training | C4 Llama 2 pre-training (val) | Perplexity | 15.92 | 47 |
| Image Classification | Mini-ImageNet | Top-1 Acc | 74.91 | 6 |
