Path-Normalized Optimization of Recurrent Neural Networks with ReLU Activations
About
We investigate the parameter-space geometry of recurrent neural networks (RNNs) and develop an adaptation of the path-SGD optimization method, attuned to this geometry, that can learn plain RNNs with ReLU activations. On several datasets that require capturing long-term dependency structure, we show that path-SGD can significantly improve the trainability of ReLU RNNs compared to RNNs trained with SGD, even when SGD is combined with various recently suggested initialization schemes.
Behnam Neyshabur, Yuhuai Wu, Ruslan Salakhutdinov, Nathan Srebro • 2016
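For context, below is a minimal NumPy sketch of the basic feedforward path-SGD update that the paper adapts to RNNs. The function name, layer sizes, learning rate, and epsilon are illustrative assumptions, not values from the paper, and the paper's actual contribution, which handles the shared recurrent weights of an RNN, is more involved than this feedforward case.

```python
import numpy as np

def path_sgd_step(weights, grads, lr=0.01, eps=1e-8):
    """One path-normalized update for a layered ReLU network (sketch).

    weights[l] has shape (n_l, n_{l+1}); grads[l] matches weights[l].
    The gradient entry for edge (u, v) is rescaled by
    kappa(u, v) = gamma_in(u) * gamma_out(v): the sum, over all paths
    through that edge, of the product of the other squared weights.
    """
    sq = [W ** 2 for W in weights]

    # gamma_in[l][u]: sum over input-to-u paths of squared-weight products.
    gamma_in = [np.ones(weights[0].shape[0])]
    for S in sq:
        gamma_in.append(gamma_in[-1] @ S)

    # gamma_out[l][v]: sum over v-to-output paths of squared-weight products.
    gamma_out = [np.ones(weights[-1].shape[1])]
    for S in reversed(sq):
        gamma_out.insert(0, S @ gamma_out[0])

    # Rescale each gradient by 1 / kappa and take a gradient step.
    return [
        W - lr * G / (np.outer(gamma_in[l], gamma_out[l + 1]) + eps)
        for l, (W, G) in enumerate(zip(weights, grads))
    ]

# Toy usage with random weights and dummy gradients (assumed shapes).
Ws = [np.random.randn(4, 8) * 0.1, np.random.randn(8, 2) * 0.1]
Gs = [np.ones_like(W) for W in Ws]
Ws = path_sgd_step(Ws, Gs)
```

Because the scaling factors depend only on the magnitudes of the weights, the update is invariant to the node-wise rescalings that leave a ReLU network's function unchanged, which is the geometric property the paper exploits.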
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Character-level Language Modeling | Penn Treebank (test) | BPC | 1.47 | 113 |
| Sequential Image Classification | MNIST Sequential (test) | Accuracy | 96.9% | 47 |
| Character-level Language Modeling | Penn Treebank char-level (test) | BPC | 1.47 | 25 |
| Sequential Image Classification | MNIST (test) | Error Rate | 3.1% | 5 |