Partially Shuffling the Training Data to Improve Language Models
About
Although SGD requires shuffling the training data between epochs, none of the current word-level language modeling systems do so. Naively shuffling all sentences in the training data would prevent the model from learning inter-sentence dependencies. Here we present a method that partially shuffles the training data between epochs: it makes each batch random while keeping most of the sentence ordering intact. The method achieves new state-of-the-art results on word-level language modeling on both the Penn Treebank and WikiText-2 datasets.
Ofir Press · 2019
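This summary does not spell out the shuffling mechanism, so below is a minimal sketch of one way to realize the idea described above, assuming a standard PyTorch word-level LM pipeline: the token stream is batchified into contiguous rows, and at the start of each epoch every row is rotated around an independently chosen random pivot. The helper names `batchify` and `partial_shuffle` and all details are illustrative, not taken from the paper's released code.

```python
import torch

def batchify(tokens, batch_size):
    """Standard LM batching: trim the 1-D token stream and reshape it
    into batch_size contiguous rows (one long sub-sequence per row)."""
    seq_len = tokens.size(0) // batch_size
    return tokens[: seq_len * batch_size].view(batch_size, seq_len)

def partial_shuffle(batched):
    """Rotate every row around an independently chosen random pivot.

    Each epoch the batches then start from new positions (random
    batches), while almost all adjacent-token ordering, and hence the
    inter-sentence context, is preserved.
    """
    rows = []
    for row in batched:
        pivot = torch.randint(row.size(0), (1,)).item()   # random split point
        rows.append(torch.cat((row[pivot:], row[:pivot])))  # wrap around
    return torch.stack(rows)

# Hypothetical usage: re-shuffle once at the start of every epoch.
tokens = torch.arange(10_000)           # stand-in for a tokenized corpus
data = batchify(tokens, batch_size=8)
for epoch in range(3):
    data = partial_shuffle(data)
    # ... run truncated-BPTT training over `data` here ...
```

One convenient property of this rotation-based sketch: rotations compose into rotations, so re-applying `partial_shuffle` to already-rotated data each epoch still leaves each row a single rotation of the original, and the "mostly intact ordering" property holds across the whole run.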
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Language Modeling | WikiText-2 (test) | Perplexity 39.03 | 1541 |
| Language Modeling | Penn Treebank (test) | Perplexity 52 | 411 |
| Language Modeling | WikiText-2 (val) | Perplexity 40.75 | 277 |
| Language Modeling | Penn Treebank (val) | Perplexity 53.79 | 178 |
| Language Modeling | Penn Treebank (PTB) (test) | Perplexity 47.49 | 120 |
| Language Modeling | Penn Treebank (PTB) (val) | Perplexity 47.93 | 70 |