
Gradient Regularized Natural Gradients

About

Gradient regularization (GR) has been shown to improve the generalizability of trained models. While natural gradient descent is known to accelerate optimization in the initial phase of training, little attention has been paid to how the training dynamics of second-order optimizers can benefit from GR. In this work, we propose Gradient-Regularized Natural Gradients (GRNG), a family of scalable second-order optimizers that integrate explicit gradient regularization with natural gradient updates. Our framework provides two complementary algorithms: a frequentist variant that avoids explicit inversion of the Fisher Information Matrix (FIM) via structured approximations, and a Bayesian variant based on a Regularized-Kalman formulation that eliminates the need for FIM inversion entirely. We establish convergence guarantees for GRNG, showing that gradient regularization improves stability and enables convergence to global minima. Empirically, we demonstrate that GRNG consistently improves both optimization speed and generalization compared to first-order methods (SGD, AdamW) and second-order baselines (K-FAC, Sophia), with strong results on vision and language benchmarks. Our findings highlight gradient regularization as a principled and practical tool to unlock the robustness of natural gradient methods for large-scale deep learning.
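The paper's exact update rules are not reproduced on this page, but the two ingredients the abstract names can be sketched together in a toy form. The sketch below is a hypothetical illustration, not the authors' algorithm: it minimizes the gradient-regularized objective L(θ) + (λ/2)·‖∇L(θ)‖² (whose extra gradient term λ·H∇L is estimated with a finite-difference Hessian-vector product, so no Hessian is formed), and it preconditions the result with a damped diagonal Fisher in place of a full FIM inverse. All names (`grng_step`, `lam`, `damping`) are illustrative assumptions.

```python
import numpy as np

def grng_step(theta, grad_fn, fisher_diag, lr=0.1, lam=0.1,
              damping=1e-3, eps=1e-4):
    """One sketched gradient-regularized natural-gradient step
    (diagonal-Fisher approximation; not the paper's exact method)."""
    g = grad_fn(theta)
    # Finite-difference Hessian-vector product: H·g ≈ (∇L(θ+εg) − ∇L(θ))/ε,
    # giving the extra GR term without materializing the Hessian.
    hvp = (grad_fn(theta + eps * g) - g) / eps
    g_reg = g + lam * hvp  # gradient of L(θ) + (λ/2)·‖∇L(θ)‖²
    # Damped diagonal-Fisher preconditioning instead of a full FIM inverse.
    return theta - lr * g_reg / (fisher_diag + damping)

# Toy ill-conditioned quadratic: L(θ) = ½·θᵀdiag(A)θ, so ∇L(θ) = A·θ and,
# for a Gaussian model, the Fisher coincides with the Hessian diag(A).
A = np.array([10.0, 1.0, 0.1])
grad_fn = lambda t: A * t
loss_fn = lambda t: 0.5 * np.sum(A * t**2)

theta = np.ones(3)
for _ in range(100):
    theta = grng_step(theta, grad_fn, fisher_diag=A)
print(loss_fn(theta))
```

On this toy problem the Fisher preconditioning roughly equalizes the per-coordinate convergence rates despite the 100x spread in curvature, which is the behavior natural-gradient methods are designed to provide; the GR term adds extra damping along high-curvature directions.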

Satya Prakash Dash, Hossein Abdi, Wei Pan, Samuel Kaski, Mingfei Sun • 2026

Related benchmarks

Task | Dataset | Metric | Result | Rank
Image Classification | CIFAR-10 (test) | Accuracy | 97.1 | 3381
Image Classification | ImageNet-100 (test) | Clean Accuracy | 90.1 | 109
Image Classification | Food-101 (test) | -- | -- | 89
Image Classification | ImageNet-100 | -- | -- | 84
Image Classification | Oxford-IIIT Pet (test) | Overall Accuracy | 92.8 | 59
Natural Language Understanding | GLUE (test) | MNLI-mm | 98.6 | 26
Image Classification | CIFAR-100 | Total Running Time (s) | 817 | 5
Image Classification | Food-101 | Total Runtime (s) | 4.73e+3 | 5
Natural Language Inference | MNLI-mm | Total Latency (s) | 8.84e+3 | 5
Paraphrase Detection | QQP | Total Running Time (s) | 8.28e+3 | 5
Showing 10 of 11 rows
