
DoG is SGD's Best Friend: A Parameter-Free Dynamic Step Size Schedule

About

We propose a tuning-free dynamic SGD step size formula, which we call Distance over Gradients (DoG). The DoG step sizes depend on simple empirical quantities (distance from the initial point and norms of gradients) and have no "learning rate" parameter. Theoretically, we show that a slight variation of the DoG formula enjoys strong parameter-free convergence guarantees for stochastic convex optimization, assuming only locally bounded stochastic gradients. Empirically, we consider a broad range of vision and language transfer learning tasks, and show that DoG's performance is close to that of SGD with a tuned learning rate. We also propose a per-layer variant of DoG that generally outperforms tuned SGD, approaching the performance of tuned Adam. A PyTorch implementation is available at https://github.com/formll/dog
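To illustrate the idea, here is a minimal NumPy sketch of the step-size rule the abstract describes: at each iteration the step size is the maximum distance traveled from the initial point divided by the square root of the cumulative squared gradient norms, with no learning-rate parameter. The small initial "distance" `r_eps` (used to avoid a zero first step) and the function names are illustrative assumptions; the official implementation is at the repository linked above.

```python
import numpy as np

def dog_sgd(grad_fn, x0, steps=500, r_eps=1e-6):
    """Sketch of the DoG (Distance over Gradients) step-size rule.

    eta_t = max_dist_t / sqrt(sum of squared gradient norms up to t),
    where max_dist_t is the running maximum of ||x_t - x0||.
    `r_eps` is an assumed tiny initial distance so the first step is nonzero.
    """
    x0 = np.asarray(x0, dtype=float)
    x = x0.copy()
    max_dist = r_eps      # running max of ||x_t - x0||
    grad_sq_sum = 0.0     # cumulative sum of ||g_t||^2
    for _ in range(steps):
        g = np.asarray(grad_fn(x), dtype=float)
        grad_sq_sum += float(np.dot(g, g))
        eta = max_dist / np.sqrt(grad_sq_sum)   # DoG step size: no tuning knob
        x = x - eta * g
        max_dist = max(max_dist, float(np.linalg.norm(x - x0)))
    return x

# Toy usage: minimize f(x) = ||x||^2 (gradient 2x) starting from [5, -3].
x_star = dog_sgd(lambda x: 2 * x, np.array([5.0, -3.0]))
```

Note how the step size starts tiny (of order `r_eps`) and grows automatically as the iterates move away from the initial point, which is what removes the need to tune a base learning rate.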

Maor Ivgi, Oliver Hinder, Yair Carmon • 2023

Related benchmarks

Task | Dataset | Metric | Result | Rank
Image Classification | CIFAR-10 (test) | Accuracy | 85.6 | 3381
Image Classification | ImageNet-100 (test) | Clean Accuracy | 76.4 | 109
Image Classification | Food-101 (test) | -- | -- | 89
Language Modeling | C4 LLaMA-130M (val) | Perplexity | 18.897 | 27
Image Classification | CIFAR-10 | Latency (ms/iter) | 19.98 | 13
Image Classification | MNIST (test) | Accuracy | 99.5 | 12
Keypoint Detection | MS-COCO 2017 (test) | mAP50 (Box) | 80.7 | 6
Instance Segmentation | MS-COCO 2017 (test) | Box mAP50 | 55.1 | 6
Molecular property prediction | OGBG | mAP | 23.1 | 6
MRI Reconstruction | fastMRI | SSIM | 0.714 | 6
