Gradual Learning of Recurrent Neural Networks
About
Recurrent Neural Networks (RNNs) achieve state-of-the-art results in many sequence-to-sequence modeling tasks. However, RNNs are difficult to train and tend to suffer from overfitting. Motivated by the Data Processing Inequality (DPI), we formulate the multi-layered network as a Markov chain and introduce a training method that combines gradual, layer-by-layer training of the network with layer-wise gradient clipping. We found that applying our methods, combined with previously introduced regularization and optimization methods, improved state-of-the-art architectures on language modeling tasks.
Ziv Aharoni, Gal Rattner, Haim Permuter • 2017
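The layer-wise gradient clipping mentioned in the abstract can be illustrated with a minimal NumPy sketch: each layer's gradient is rescaled against its own norm budget, rather than clipping the global gradient norm across all layers at once. The function name and per-layer budgets below are illustrative assumptions, not taken from the paper's implementation.

```python
import numpy as np

def clip_layerwise(grads, max_norms):
    """Clip each layer's gradient to its own L2-norm budget.

    grads     : list of per-layer gradient arrays
    max_norms : list of per-layer norm budgets (same length)
    Returns a new list of clipped gradients.
    """
    clipped = []
    for g, c in zip(grads, max_norms):
        norm = np.linalg.norm(g)
        # Rescale only when the layer's gradient exceeds its budget.
        scale = min(1.0, c / norm) if norm > 0 else 1.0
        clipped.append(g * scale)
    return clipped

# Example: the first layer's gradient (norm 4.0) is clipped to norm 1.0,
# the second (norm ~1.73) is under its budget of 5.0 and left unchanged.
grads = [2.0 * np.ones(4), np.ones(3)]
out = clip_layerwise(grads, [1.0, 5.0])
```

In a gradual-training setup, this would be applied inside the training loop as layers are added one at a time, so that each layer's updates stay bounded independently of the others.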
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Language Modeling | WikiText-2 (test) | Perplexity | 40.46 | 1541 |
| Language Modeling | Penn Treebank (test) | Perplexity | 46.34 | 411 |
| Language Modeling | WikiText-2 (val) | Perplexity | 42.19 | 277 |
| Language Modeling | Penn Treebank (val) | Perplexity | 46.64 | 178 |