
Pushing the bounds of dropout

About

We show that dropout training is best understood as performing MAP estimation concurrently for a family of conditional models whose objectives are themselves lower bounded by the original dropout objective. This discovery allows us to pick any model from this family after training, which leads to a substantial improvement on regularisation-heavy language modelling. The family includes models that compute a power mean over the sampled dropout masks, and their less stochastic subvariants with tighter and higher lower bounds than the fully stochastic dropout objective. We argue that since the deterministic subvariant's bound is equal to its objective, and the highest amongst these models, the predominant view of it as a good approximation to MC averaging is misleading. Rather, deterministic dropout is the best available approximation to the true objective.

Gábor Melis, Charles Blundell, Tomáš Kočiský, Karl Moritz Hermann, Chris Dyer, Phil Blunsom • 2018
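
To make the bound concrete: by Jensen's inequality, the expected log-likelihood under random dropout masks, E_z[log p(y|x,z)], lower-bounds the log-likelihood of the mask-averaged model, log E_z[p(y|x,z)]. The sketch below is a minimal NumPy illustration, not the paper's code; the toy model and all function names are assumptions for exposition. It contrasts three ways to predict with a dropout-trained network: Monte Carlo arithmetic averaging over sampled masks, the power mean that generalises it, and the deterministic pass that replaces each mask by its expectation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy single-hidden-layer classifier with dropout on the hidden units
# (illustrative only; weights are random stand-ins for a trained model).
W1 = rng.normal(size=(16, 8))
W2 = rng.normal(size=(8, 4))
KEEP = 0.5  # keep probability


def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)


def forward(x, mask):
    # One forward pass with a fixed (inverted-scaling) dropout mask.
    h = np.maximum(x @ W1, 0.0) * mask
    return softmax(h @ W2)


def predict_power_mean(x, n_masks=32, beta=1.0):
    # Power mean over sampled dropout masks: beta=1 recovers the usual
    # MC arithmetic average; beta -> 0 gives the geometric mean.
    probs = []
    for _ in range(n_masks):
        mask = rng.binomial(1, KEEP, size=W1.shape[1]) / KEEP
        probs.append(forward(x, mask))
    probs = np.stack(probs)
    if beta == 0.0:
        p = np.exp(np.log(probs).mean(axis=0))          # geometric mean
    else:
        p = (probs ** beta).mean(axis=0) ** (1.0 / beta)
    return p / p.sum(axis=-1, keepdims=True)            # renormalise


def predict_deterministic(x):
    # Deterministic dropout: the inverted-scaling mask has expectation 1,
    # so the deterministic pass simply uses a mask of ones.
    return forward(x, np.ones(W1.shape[1]))


x = rng.normal(size=16)
print(predict_power_mean(x, beta=1.0))   # MC averaging
print(predict_power_mean(x, beta=0.0))   # geometric mean
print(predict_deterministic(x))          # deterministic pass
```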

Related benchmarks

Task                Dataset                          Result            Rank
Language Modeling   WikiText-2 (test)                Perplexity 63.7   1541
Language Modeling   PTB (test)                       Perplexity 55.3    471
Language Modeling   WikiText-2 (val)                 Perplexity 66.9    277
Language Modeling   PTB (val)                        Perplexity 57.1     83
Language Modeling   Penn Treebank word-level (test)  Perplexity 55.3     72
