
Neural Machine Translation with Adequacy-Oriented Learning

About

Although Neural Machine Translation (NMT) models have advanced the state of the art in machine translation, they still suffer from problems such as inadequate translation. We attribute this to the fact that the standard Maximum Likelihood Estimation (MLE) objective cannot judge real translation quality, owing to several limitations. In this work, we propose an adequacy-oriented learning mechanism for NMT by casting translation as a stochastic policy in Reinforcement Learning (RL), where the reward is estimated by explicitly measuring translation adequacy. Benefiting from the sequence-level training of the RL strategy and a more accurate reward designed specifically for translation, our model outperforms multiple strong baselines, including (1) standard and coverage-augmented attention models with MLE-based training, and (2) advanced reinforcement and adversarial training strategies with rewards based on both word-level BLEU and character-level chrF3. Quantitative and qualitative analyses on different language pairs and NMT architectures demonstrate the effectiveness and universality of the proposed approach.
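The core idea above — scoring a sampled translation with an adequacy reward and training at the sequence level rather than token-by-token — can be sketched with a REINFORCE-style loss. This is a minimal illustration, not the paper's implementation: `toy_adequacy_reward` (source content-word coverage) and the function names are hypothetical stand-ins for the paper's adequacy metric.

```python
import math

def sequence_level_rl_loss(token_logprobs, reward, baseline=0.0):
    """REINFORCE-style sequence loss: scale the sampled translation's
    log-likelihood by (reward - baseline). Translations with higher
    adequacy reward are reinforced; the baseline reduces variance."""
    advantage = reward - baseline
    return -advantage * sum(token_logprobs)

def toy_adequacy_reward(source_content_words, hypothesis_words):
    """Toy stand-in for an adequacy reward: the fraction of source
    content words covered by the hypothesis (a coverage proxy)."""
    if not source_content_words:
        return 0.0
    covered = sum(1 for w in source_content_words if w in hypothesis_words)
    return covered / len(source_content_words)

# A hypothesis covering 2 of 3 source content words gets reward 2/3;
# the whole sequence is then credited (or penalized) as one unit.
reward = toy_adequacy_reward({"cat", "sat", "mat"}, {"the", "cat", "sat"})
logps = [math.log(0.5), math.log(0.4), math.log(0.6)]
loss = sequence_level_rl_loss(logps, reward, baseline=0.5)
```

Unlike MLE, which maximizes per-token likelihood of the reference, this objective lets any measure of whole-sentence adequacy shape the gradient.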

Xiang Kong, Zhaopeng Tu, Shuming Shi, Eduard Hovy, Tong Zhang • 2018

Related benchmarks

Task                                       Dataset                  Metric   Result   Rank
Machine Translation                        WMT En-De 2014 (test)    BLEU     28.99    379
Machine Translation                        IWSLT De-En 2014 (test)  BLEU     27.79    146
Machine Translation (Chinese-to-English)   NIST MT-03 (2003)        BLEU     38.62    52
Machine Translation (Chinese-to-English)   NIST MT-05 (2005)        BLEU     39.39    42
Machine Translation                        NIST MT-04 (2004, test)  BLEU     41.98    27
Machine Translation                        NIST MT-06 (2006, test)  BLEU     37.54    27
Machine Translation                        NIST Zh-En All (test)    BLEU     39.81    10
