Recurrent Neural Network Regularization
About
We present a simple regularization technique for Recurrent Neural Networks (RNNs) with Long Short-Term Memory (LSTM) units. Dropout, the most successful technique for regularizing neural networks, does not work well with RNNs and LSTMs. In this paper, we show how to correctly apply dropout to LSTMs, and show that it substantially reduces overfitting on a variety of tasks. These tasks include language modeling, speech recognition, image caption generation, and machine translation.
Wojciech Zaremba, Ilya Sutskever, Oriol Vinyals • 2014
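The key idea of the paper is to apply dropout only to the non-recurrent connections of a multi-layer LSTM (the input-to-layer and layer-to-layer paths at each time step), while leaving the recurrent hidden-to-hidden connections untouched. The PyTorch sketch below illustrates that placement; it is an illustrative reconstruction under those assumptions, not the authors' code, and the class name `DropoutLSTM` and its hyperparameters are placeholders.

```python
# Illustrative sketch (not the authors' implementation): dropout is applied
# only to non-recurrent connections (input and between-layer paths at each
# time step), never to the recurrent hidden-to-hidden path.
import torch
import torch.nn as nn


class DropoutLSTM(nn.Module):
    def __init__(self, vocab_size, hidden_size, num_layers=2, dropout=0.5):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_size)
        self.cells = nn.ModuleList(
            [nn.LSTMCell(hidden_size, hidden_size) for _ in range(num_layers)]
        )
        self.drop = nn.Dropout(dropout)  # used on non-recurrent connections only
        self.decoder = nn.Linear(hidden_size, vocab_size)

    def forward(self, tokens, state=None):
        # tokens: (seq_len, batch) tensor of word ids
        if state is None:
            zeros = tokens.new_zeros(
                len(self.cells), tokens.size(1), self.embed.embedding_dim,
                dtype=torch.float)
            state = (zeros.clone(), zeros.clone())
        h, c = list(state[0]), list(state[1])
        logits = []
        for x_t in self.embed(tokens):               # iterate over time steps
            inp = self.drop(x_t)                     # dropout on the input (non-recurrent)
            for i, cell in enumerate(self.cells):
                h[i], c[i] = cell(inp, (h[i], c[i]))  # recurrent path: no dropout
                inp = self.drop(h[i])                # dropout between layers (non-recurrent)
            logits.append(self.decoder(inp))
        return torch.stack(logits), (torch.stack(h), torch.stack(c))
```

For comparison, the `dropout` argument of `torch.nn.LSTM` likewise applies dropout only between stacked layers rather than along the recurrent path.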
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Language Modeling | WikiText-2 (test) | PPL | 96.41 | 1541 |
| Language Modeling | WikiText-103 (test) | Perplexity | 48.7 | 524 |
| Language Modeling | PTB (test) | Perplexity | 78.4 | 471 |
| Language Modeling | Penn Treebank (test) | Perplexity | 56 | 411 |
| Language Modeling | Penn Treebank (val) | Perplexity | 71.9 | 178 |
| Language Modeling | Penn Treebank (PTB) (test) | Perplexity | 79.34 | 120 |
| Language Modeling | PTB (val) | Perplexity | 82.2 | 83 |
| Language Modeling | Penn Treebank word-level (test) | Perplexity | 78.4 | 72 |
| Machine Translation | WMT en-fr 14 | BLEU Score | 29.03 | 56 |
| Image Classification | CIFAR-10 | Accuracy | 94.53 | 15 |
Showing 10 of 15 benchmark rows.