
On the Variance of the Adaptive Learning Rate and Beyond

About

The learning rate warmup heuristic achieves remarkable success in stabilizing training, accelerating convergence and improving generalization for adaptive stochastic optimization algorithms like RMSprop and Adam. Here, we study its mechanism in detail. Pursuing the theory behind warmup, we identify a problem with the adaptive learning rate (i.e., it has problematically large variance in the early stage), suggest that warmup works as a variance reduction technique, and provide both empirical and theoretical evidence to verify our hypothesis. We further propose RAdam, a new variant of Adam, by introducing a term to rectify the variance of the adaptive learning rate. Extensive experimental results on image classification, language modeling, and neural machine translation verify our intuition and demonstrate the effectiveness and robustness of our proposed method. All implementations are available at: https://github.com/LiyuanLucasLiu/RAdam.
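A minimal sketch of a single RAdam-style update step, assuming NumPy; the variable names and the fallback threshold are illustrative choices based on the rectification idea described above, and the authoritative implementation is the linked repository:

def radam_step(param, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One RAdam-style update step (illustrative sketch).

    m, v: first/second moment estimates carried over from the previous step.
    t:    1-based step counter.
    """
    import numpy as np

    # Update biased moment estimates as in Adam.
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad * grad
    m_hat = m / (1 - beta1 ** t)

    # Length of the approximated simple moving average (SMA).
    rho_inf = 2.0 / (1.0 - beta2) - 1.0
    rho_t = rho_inf - 2.0 * t * (beta2 ** t) / (1.0 - beta2 ** t)

    if rho_t > 4.0:
        # Variance of the adaptive learning rate is tractable:
        # apply the rectification term and take the adaptive step.
        v_hat = np.sqrt(v / (1 - beta2 ** t))
        r_t = np.sqrt(((rho_t - 4) * (rho_t - 2) * rho_inf)
                      / ((rho_inf - 4) * (rho_inf - 2) * rho_t))
        param = param - lr * r_t * m_hat / (v_hat + eps)
    else:
        # Early on the variance is too large to trust: fall back to an
        # un-adapted, SGD-with-momentum style step.
        param = param - lr * m_hat

    return param, m, v

In the first few steps rho_t stays below the threshold, so the update ignores the second-moment estimate entirely; once enough gradients have been observed, the rectified adaptive step takes over, which is what makes a separate warmup schedule unnecessary.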

Liyuan Liu, Haoming Jiang, Pengcheng He, Weizhu Chen, Xiaodong Liu, Jianfeng Gao, Jiawei Han • 2019

Related benchmarks

Task                 | Dataset                  | Result                     | Rank
Image Classification | CIFAR-10 (test)          | Accuracy: 95.18            | 3381
Image Generation     | CIFAR-10 (test)          | FID: 13                    | 471
Machine Translation  | WMT En-De 2014 (test)    | BLEU: 27.29                | 379
Question Answering   | SQuAD v1.1 (test)        | F1 Score: 88.38            | 260
Image Classification | ImageNet (test)          | --                         | 235
Machine Translation  | IWSLT De-En 2014 (test)  | BLEU: 34.97                | 146
Machine Translation  | WMT En-De '14            | BLEU: 27.29                | 89
Machine Translation  | IWSLT14 DE-EN            | BLEU Score: 34.97          | 22
Machine Translation  | IWSLT DE-EN '14 (train)  | Training Perplexity: 3.36  | 12
Machine Translation  | IWSLT'14 DE-EN (val)     | Validation PPL: 4.92       | 12

(Showing 10 of 19 rows.)
