On the Spectral Bias of Neural Networks

About

Neural networks are known to be a class of highly expressive functions able to fit even random input-output mappings with 100% accuracy. In this work, we present properties of neural networks that complement this aspect of expressivity. By using tools from Fourier analysis, we show that deep ReLU networks are biased towards low frequency functions, meaning that they cannot have local fluctuations without affecting their global behavior. Intuitively, this property is in line with the observation that over-parameterized networks find simple patterns that generalize across data samples. We also investigate how the shape of the data manifold affects expressivity by showing evidence that learning high frequencies gets easier with increasing manifold complexity, and present a theoretical understanding of this behavior. Finally, we study the robustness of the frequency components with respect to parameter perturbation, to develop the intuition that the parameters must be finely tuned to express high frequency functions.

Nasim Rahaman, Aristide Baratin, Devansh Arpit, Felix Draxler, Min Lin, Fred A. Hamprecht, Yoshua Bengio, Aaron Courville • 2018
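
As a rough illustration of the experiment the abstract describes, here is a minimal sketch (not the authors' code): it fits a small deep ReLU network to an equal-amplitude superposition of sinusoids and tracks how fast each frequency component of the fit converges. The network size, learning rate, target frequencies, and the projection-based spectrum estimate are all illustrative assumptions, chosen only to make the low-frequency-first behavior visible.

```python
# Minimal spectral-bias sketch (assumed hyperparameters throughout).
import math
import torch
import torch.nn as nn

torch.manual_seed(0)

freqs = [1.0, 3.0, 5.0, 7.0]                    # illustrative target frequencies
x = torch.linspace(0.0, 1.0, 512).unsqueeze(1)  # inputs in [0, 1]
y = sum(torch.sin(2 * math.pi * k * x) for k in freqs)  # equal-amplitude target

net = nn.Sequential(                            # small deep ReLU network
    nn.Linear(1, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def spectrum(f):
    """Estimate the amplitude of each target frequency in f by projecting
    onto the corresponding sinusoid (mean of sin^2 over [0, 1] is 1/2)."""
    fs = f.squeeze()
    return [float(2 * torch.mean(fs * torch.sin(2 * math.pi * k * x.squeeze())))
            for k in freqs]

for step in range(5001):
    opt.zero_grad()
    loss = torch.mean((net(x) - y) ** 2)
    loss.backward()
    opt.step()
    if step % 1000 == 0:
        with torch.no_grad():
            amps = spectrum(net(x))
        # The low-frequency amplitudes approach 1 first; high ones lag behind.
        print(step, [f"{a:.2f}" for a in amps])
```

In line with the paper's claim, the printed amplitudes for the low frequencies saturate early in training, while the high-frequency components are fit only much later.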

Related benchmarks

Task                   | Dataset        | Result (RMSE) | Rank
-----------------------|----------------|---------------|-----
Function Approximation | 2D smooth      | 8.131         | 5
Function Approximation | 2D r^-1        | 1.731         | 4
Function Approximation | 2D multi-power | 1.641         | 4
Function Approximation | 3D Coulomb     | 2.361         | 4
Function Approximation | 2D log r       | 7.331         | 4
Function Approximation | 2D r^1/2      | 5.881         | 4
Function Approximation | 2D 2-source    | 1.461         | 2
Function Approximation | 2D crack-tip   | 4.351         | 2
Function Approximation | 2D 3-source    | 2.201         | 2
