
SGD as Free Energy Minimization: A Thermodynamic View on Neural Network Training

About

We present a thermodynamic interpretation of the stationary behavior of stochastic gradient descent (SGD) under fixed learning rates (LRs) in neural network training. We show that SGD implicitly minimizes a free energy function $F = U - TS$, balancing the training loss $U$ against the entropy $S$ of the weight distribution, with temperature $T$ determined by the LR. This perspective offers a new lens on why high LRs prevent training from converging to minima of the loss and how different LRs lead to stabilization at different loss levels. We empirically validate the free energy framework on both underparameterized (UP) and overparameterized (OP) models. UP models consistently follow free energy minimization, with temperature increasing monotonically with the LR, while for OP models the temperature effectively drops to zero at low LRs, causing SGD to minimize the loss directly and converge to an optimum. We attribute this mismatch to differences in the signal-to-noise ratio of stochastic gradients near optima, supported by both a toy example and neural network experiments.
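As a minimal sketch of the LR-as-temperature claim (not the paper's experiment), consider SGD on the 1D quadratic $U(w) = w^2/2$ with additive Gaussian gradient noise standing in for minibatch noise. The iterates do not converge to the optimum but fluctuate around it, and the stationary expected loss grows with the LR, matching the closed-form value $\mathbb{E}[U] = \eta\sigma^2 / (2(2-\eta))$ for this toy model:

```python
import numpy as np

def stationary_loss(lr, sigma=1.0, steps=200_000, seed=0):
    """Run SGD on U(w) = w^2 / 2 with additive gradient noise and
    return the average loss over the second half of the run, i.e.
    after the iterates have reached their stationary distribution."""
    rng = np.random.default_rng(seed)
    w = 5.0
    losses = []
    for t in range(steps):
        grad = w + sigma * rng.standard_normal()  # noisy gradient of w^2/2
        w -= lr * grad
        if t >= steps // 2:
            losses.append(0.5 * w * w)
    return float(np.mean(losses))

for lr in (0.01, 0.1, 0.5):
    # closed form for this toy model: E[U] = lr * sigma^2 / (2 * (2 - lr))
    print(f"lr={lr}: simulated {stationary_loss(lr):.4f}, "
          f"theory {lr / (2 * (2 - lr)):.4f}")
```

Higher LRs settle at proportionally higher loss levels, which is the "temperature" effect the abstract describes; in the UP regime the paper reports the same monotone dependence of temperature on the LR.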

Ildus Sadrtdinov, Ivan Klimov, Ekaterina Lobacheva, Dmitry Vetrov • 2025

Related benchmarks

| Task                 | Dataset                  | Metric             | Result | Rank |
|----------------------|--------------------------|--------------------|--------|------|
| Image Classification | MNIST (train)            | Training Loss      | 0.789  | 38   |
| Regression           | Burgers' dataset (train) | MSE                | 0.0012 | 18   |
| Regression           | Burgers' dataset (test)  | MSE                | 0.0026 | 18   |
| Image Classification | MNIST (test)             | Test Cross-Entropy | 0.697  | 18   |
