
Gradual Learning of Recurrent Neural Networks

About

Recurrent Neural Networks (RNNs) achieve state-of-the-art results in many sequence-to-sequence modeling tasks. However, RNNs are difficult to train and tend to suffer from overfitting. Motivated by the Data Processing Inequality (DPI), we formulate the multi-layered network as a Markov chain and introduce a training method that combines gradual, layer-by-layer training with layer-wise gradient clipping. We found that applying our methods, combined with previously introduced regularization and optimization methods, improved state-of-the-art architectures on language modeling tasks.
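The abstract's layer-wise gradient clipping can be illustrated with a minimal sketch: each layer's gradient is rescaled to a maximum norm independently, rather than clipping the global norm across all layers. This is an assumption about the general technique, not the paper's exact rule, and the function name `clip_layerwise` is hypothetical.

```python
import numpy as np

def clip_layerwise(grads, max_norm):
    """Rescale each layer's gradient independently so that its L2 norm
    does not exceed max_norm (hypothetical sketch of layer-wise clipping)."""
    clipped = []
    for g in grads:
        norm = np.linalg.norm(g)
        if norm > max_norm:
            g = g * (max_norm / norm)  # shrink this layer only
        clipped.append(g)
    return clipped

# Example: the first layer's gradient is clipped, the second is left intact.
layer_grads = [np.ones((2, 2)) * 10.0, np.ones(3) * 0.1]
layer_grads = clip_layerwise(layer_grads, max_norm=1.0)
```

Clipping per layer keeps one exploding layer from forcing the whole gradient to be scaled down, which matters when layers are trained gradually.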

Ziv Aharoni, Gal Rattner, Haim Permuter • 2017

Related benchmarks

Task              | Dataset              | Result                 | Rank
Language Modeling | WikiText-2 (test)    | Perplexity (PPL) 40.46 | 1541
Language Modeling | Penn Treebank (test) | Perplexity (PPL) 46.34 | 411
Language Modeling | WikiText-2 (val)     | Perplexity (PPL) 42.19 | 277
Language Modeling | Penn Treebank (val)  | Perplexity (PPL) 46.64 | 178

Other info

Code
