
Improving Neural Language Modeling via Adversarial Training

About

Recently, substantial progress has been made in language modeling by using deep neural networks. However, in practice, large-scale neural language models have been shown to be prone to overfitting. In this paper, we present a simple yet highly effective adversarial training mechanism for regularizing neural language models. The idea is to introduce adversarial noise to the output embedding layer while training the models. We show that the optimal adversarial noise yields a simple closed-form solution, thus allowing us to develop a simple and time-efficient algorithm. Theoretically, we show that our adversarial mechanism effectively encourages the diversity of the embedding vectors, helping to increase the robustness of models. Empirically, we show that our method improves on the single-model state-of-the-art results for language modeling on Penn Treebank (PTB) and WikiText-2, achieving test perplexity scores of 46.01 and 38.07, respectively. When applied to machine translation, our method improves over various transformer-based translation baselines in BLEU scores on the WMT14 English-German and IWSLT14 German-English tasks.
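The abstract does not spell out the closed-form solution, so the sketch below is only an illustration of the general idea: perturb the output embedding of the ground-truth token with worst-case (loss-maximizing) noise of bounded L2 norm. Under that setup, the softmax cross-entropy loss is monotone in the inner product between the hidden state h and the noise, so the optimal noise is delta* = -eps * h / ||h||; the function name and shapes are our own, not from the paper.

```python
import numpy as np

def adversarial_softmax_loss(h, W, y, eps=0.1):
    """Cross-entropy loss with worst-case L2-bounded noise added to the
    ground-truth row of the output embedding matrix.

    h:   hidden state, shape (d,)
    W:   output embedding matrix, shape (V, d)
    y:   index of the ground-truth token
    eps: norm bound on the adversarial noise (eps=0 recovers the plain loss)
    """
    # Closed-form worst-case noise: the loss decreases in h . delta,
    # so the maximizer over ||delta|| <= eps is -eps * h / ||h||.
    delta = -eps * h / (np.linalg.norm(h) + 1e-12)
    W_adv = W.copy()
    W_adv[y] = W[y] + delta            # perturb only the target embedding
    logits = W_adv @ h
    logits = logits - logits.max()     # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum())
    return -log_probs[y]
```

Because the noise is available in closed form, no inner optimization loop is needed; training just uses this perturbed loss in place of the standard cross-entropy, which is what makes the scheme time-efficient.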

Dilin Wang, Chengyue Gong, Qiang Liu • 2019

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Language Modeling | WikiText-2 (test) | PPL 38.07 | 1541 |
| Language Modeling | WikiText-103 (test) | Perplexity 28 | 524 |
| Machine Translation | WMT En-De 2014 (test) | BLEU 29.52 | 379 |
| Language Modeling | WikiText-2 (val) | Perplexity (PPL) 39.58 | 277 |
| Language Modeling | WikiText-103 (val) | PPL 27.2 | 180 |
| Machine Translation | IWSLT De-En 2014 (test) | BLEU 35.18 | 146 |
| Machine Translation | WMT English-German 2014 (test) | BLEU 28.4 | 136 |
| Language Modeling | Penn Treebank (PTB) (test) | Perplexity 46.01 | 120 |
| Machine Translation | IWSLT German-to-English '14 (test) | BLEU 35.2 | 110 |
| Language Modeling | Penn Treebank (PTB) (val) | Perplexity 46.63 | 70 |
Showing 10 of 11 rows

Other info

Code
