
A Theoretically Grounded Application of Dropout in Recurrent Neural Networks

About

Recurrent neural networks (RNNs) stand at the forefront of many recent developments in deep learning. Yet a major difficulty with these models is their tendency to overfit, with dropout shown to fail when applied to recurrent layers. Recent results at the intersection of Bayesian modelling and deep learning offer a Bayesian interpretation of common deep learning techniques such as dropout. This grounding of dropout in approximate Bayesian inference suggests an extension of the theoretical results, offering insights into the use of dropout with RNN models. We apply this new variational inference based dropout technique in LSTM and GRU models, assessing it on language modelling and sentiment analysis tasks. The new approach outperforms existing techniques, and to the best of our knowledge improves on the single model state-of-the-art in language modelling with the Penn Treebank (73.4 test perplexity). This extends our arsenal of variational tools in deep learning.
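Concretely, the technique ties the dropout masks across time: one Bernoulli mask is sampled per sequence for the inputs and one for the recurrent state, and the same masks are reused at every time step, rather than being resampled per step as in standard dropout. Below is a minimal PyTorch sketch of this idea; the class name and structure are illustrative assumptions, not the authors' released code (the paper additionally drops out word embeddings, omitted here for brevity).

```python
import torch
import torch.nn as nn

class VariationalLSTM(nn.Module):
    """Single-layer LSTM with dropout masks sampled once per sequence,
    a sketch of the tied-mask (variational) dropout scheme."""

    def __init__(self, input_size, hidden_size, dropout=0.5):
        super().__init__()
        self.cell = nn.LSTMCell(input_size, hidden_size)
        self.hidden_size = hidden_size
        self.dropout = dropout

    def forward(self, x):
        # x: (seq_len, batch, input_size)
        seq_len, batch, _ = x.shape
        h = x.new_zeros(batch, self.hidden_size)
        c = x.new_zeros(batch, self.hidden_size)

        if self.training and self.dropout > 0:
            keep = 1.0 - self.dropout
            # Sample one mask for the inputs and one for the recurrent
            # state; both are reused at every time step below.
            mask_x = x.new_empty(batch, x.size(2)).bernoulli_(keep) / keep
            mask_h = x.new_empty(batch, self.hidden_size).bernoulli_(keep) / keep
        else:
            mask_x = mask_h = None

        outputs = []
        for t in range(seq_len):
            inp = x[t] * mask_x if mask_x is not None else x[t]
            h_in = h * mask_h if mask_h is not None else h
            h, c = self.cell(inp, (h_in, c))
            outputs.append(h)
        return torch.stack(outputs), (h, c)
```

At test time (`model.eval()`) the masks are dropped; the paper also evaluates with Monte Carlo sampling, averaging predictions over several sampled masks.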

Yarin Gal, Zoubin Ghahramani • 2015

Related benchmarks

Task | Dataset | Result | Rank
Language Modeling | WikiText-2 (test) | PPL 87.7 | 1541
Language Modeling | PTB (test) | Perplexity 73.4 | 471
Language Modeling | Penn Treebank (test) | Perplexity 68.7 | 411
Language Modeling | WikiText2 (val) | Perplexity (PPL) 92.3 | 277
Language Modeling | Penn Treebank (val) | Perplexity 77.3 | 178
Language Modeling | Penn Treebank (PTB) (test) | Perplexity 73.4 | 120
Language Modeling | PTB (val) | Perplexity 77.9 | 83
Language Modeling | Penn Treebank word-level (test) | Perplexity 73.4 | 72
Text Generation | IMDB reviews (test) | Grammaticality 20 | 10
Text Generation | PTB (test) | Grammaticality 0.327 | 10

Showing 10 of 12 rows.
