
Noisy Recurrent Neural Networks

About

We provide a general framework for studying recurrent neural networks (RNNs) trained by injecting noise into hidden states. Specifically, we consider RNNs that can be viewed as discretizations of stochastic differential equations driven by input data. This framework allows us to study the implicit regularization effect of general noise injection schemes by deriving an approximate explicit regularizer in the small noise regime. We find that, under reasonable assumptions, this implicit regularization promotes flatter minima; it biases towards models with more stable dynamics; and, in classification tasks, it favors models with larger classification margin. Sufficient conditions for global stability are obtained, highlighting the phenomenon of stochastic stabilization, where noise injection can improve stability during training. Our theory is supported by empirical results demonstrating that noise-injected RNNs have improved robustness with respect to various input perturbations.
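The core construction is easy to sketch in code: each hidden-state update is one Euler-Maruyama step of an SDE whose drift is an ordinary RNN update and whose diffusion term is the injected noise. Below is a minimal illustrative sketch in PyTorch; the class name NoisyRNNCell, the tanh drift, and all hyperparameter values are our own assumptions for exposition, not the authors' released implementation (see the Code link below for that).

import torch
import torch.nn as nn

class NoisyRNNCell(nn.Module):
    """Hypothetical sketch of one noisy RNN step as an Euler-Maruyama
    discretization of the SDE  dh = f(h, x) dt + sigma dW,
    where f is a standard recurrent update."""

    def __init__(self, input_size, hidden_size, dt=0.1, sigma=0.05):
        super().__init__()
        self.W = nn.Linear(hidden_size, hidden_size, bias=False)  # recurrent weights
        self.U = nn.Linear(input_size, hidden_size)               # input weights
        self.dt = dt          # discretization step size
        self.sigma = sigma    # noise level; sigma -> 0 is the small-noise regime

    def forward(self, x, h):
        drift = torch.tanh(self.W(h) + self.U(x))           # deterministic drift f(h, x)
        if self.training:                                   # inject noise during training only
            noise = torch.randn_like(h) * (self.dt ** 0.5)  # Brownian increment ~ N(0, dt)
            return h + self.dt * drift + self.sigma * noise
        return h + self.dt * drift                          # noiseless dynamics at test time

# Usage: unroll the cell over a sequence, e.g. an image fed pixel by pixel.
cell = NoisyRNNCell(input_size=1, hidden_size=64)
h = torch.zeros(8, 64)                      # batch of 8 hidden states
for x_t in torch.randn(28 * 28, 8, 1):      # 784 time steps of scalar inputs
    h = cell(x_t, h)

Keeping the noise term only under self.training mirrors the setting analyzed in the paper, where noise injected during training acts as an implicit regularizer and evaluation uses the clean dynamics.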

Soon Hoe Lim, N. Benjamin Erichson, Liam Hodgkinson, Michael W. Mahoney • 2021

Related benchmarks

Task | Dataset | Result | Rank
Ordered Pixel-by-Pixel Classification | MNIST ordered pixels (test) | Accuracy: 99.1 | 42
Sequential Image Classification | MNIST ordered pixel-by-pixel 1.0 (test) | Accuracy: 98.8 | 32
Permuted Pixel-by-Pixel MNIST Classification | Permuted MNIST (pMNIST) pixel-by-pixel (test) | Accuracy (Clean): 94.9 | 25
Classification | ECG (test) | Accuracy (Clean): 97.7 | 5

Other info

Code
