
Subword Regularization: Improving Neural Network Translation Models with Multiple Subword Candidates

About

Subword units are an effective way to alleviate the open vocabulary problems in neural machine translation (NMT). While sentences are usually converted into unique subword sequences, subword segmentation is potentially ambiguous and multiple segmentations are possible even with the same vocabulary. The question addressed in this paper is whether it is possible to harness the segmentation ambiguity as a noise to improve the robustness of NMT. We present a simple regularization method, subword regularization, which trains the model with multiple subword segmentations probabilistically sampled during training. In addition, for better subword sampling, we propose a new subword segmentation algorithm based on a unigram language model. We experiment with multiple corpora and report consistent improvements especially on low resource and out-of-domain settings.
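The core idea — training on multiple probabilistically sampled segmentations rather than one deterministic one — can be sketched with a toy unigram language model. The sketch below uses forward-filtering backward-sampling over a hypothetical hand-set vocabulary (the piece probabilities are illustrative, not values from the paper): the forward pass accumulates the total probability of all segmentations of each prefix, and the backward pass then draws one segmentation in proportion to its unigram probability.

```python
import random

# Hypothetical unigram piece probabilities (illustrative only).
vocab = {"un": 0.2, "fun": 0.15, "unfun": 0.1, "u": 0.05, "n": 0.05, "f": 0.05}

def sample_segmentation(text, vocab, rng=random):
    """Sample one subword segmentation of `text`, with probability
    proportional to the product of its pieces' unigram probabilities."""
    n = len(text)
    # Forward pass: alpha[i] = total probability of all segmentations of text[:i].
    alpha = [0.0] * (n + 1)
    alpha[0] = 1.0
    for i in range(1, n + 1):
        for j in range(i):
            piece = text[j:i]
            if piece in vocab:
                alpha[i] += alpha[j] * vocab[piece]
    if alpha[n] == 0.0:
        raise ValueError("text cannot be segmented with this vocabulary")
    # Backward pass: walk from the end, choosing each boundary j
    # with weight alpha[j] * p(text[j:i]).
    pieces, i = [], n
    while i > 0:
        cands = [(j, alpha[j] * vocab[text[j:i]])
                 for j in range(i) if text[j:i] in vocab]
        total = sum(w for _, w in cands)
        r = rng.random() * total
        for j, w in cands:
            r -= w
            if r <= 0:
                break
        pieces.append(text[j:i])
        i = j
    return pieces[::-1]

print(sample_segmentation("unfun", vocab))  # e.g. ['un', 'fun'] or ['unfun']
```

In practice the released SentencePiece library exposes this kind of sampling directly (its Python `encode` accepts an `enable_sampling` flag with `alpha` and `nbest_size` parameters), so the sketch above is only meant to show the mechanism.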

Taku Kudo • 2018

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Machine Translation | WMT En-De 2014 (test) | BLEU | 27.82 | 379 |
| Language Modeling | WikiText-103 (val) | PPL | 106.9 | 180 |
| Machine Translation | WMT De-En 2014 (test) | BLEU | 33.65 | 59 |
| Machine Translation | IWSLT En-Vi 2015 (test) | BLEU | 32.43 | 17 |
| Machine Translation | IWSLT Fr-En 2017 (test) | BLEU | 38.88 | 14 |
| Machine Translation | IWSLT De-En (test) | BLEU | 36.14 | 13 |
| Compression | 300 MB Monolingual Corpora | Train Tokens | 58.7 | 9 |
| Morphological Alignment | English 300 MB Corpora | Morph. Score | 64.4 | 9 |
| Named Entity Recognition | NER Average over all languages (test) | F1 Score | 62.8 | 9 |
| Machine Translation | ASPEC En-Ja | BLEU Score | 55.46 | 8 |
Showing 10 of 66 rows
