
Modeling Coverage for Neural Machine Translation

About

The attention mechanism has enhanced state-of-the-art Neural Machine Translation (NMT) by jointly learning to align and translate. However, it tends to ignore past alignment information, which often leads to over-translation and under-translation. To address this problem, we propose coverage-based NMT. We maintain a coverage vector to keep track of the attention history. The coverage vector is fed to the attention model to help adjust future attention, which guides the NMT system to pay more attention to untranslated source words. Experiments show that the proposed approach significantly improves both translation quality and alignment quality over standard attention-based NMT.
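The idea can be illustrated with a minimal NumPy sketch of one decoding step. This assumes the simplest coverage variant, a running sum of past attention weights per source position, fed as an extra input to the alignment score; the parameter names (Wa, Ua, Va, va) and shapes are illustrative, not taken from the paper.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def coverage_attention(decoder_state, enc_states, coverage, Wa, Ua, Va, va):
    """One decoding step of coverage-augmented attention (sketch).

    coverage holds one scalar per source position: the accumulated
    attention that position has received so far. Feeding it into the
    alignment score lets the model discount already-translated words.
    """
    # score_i = va . tanh(Wa s + Ua h_i + Va * c_i)
    scores = np.array([
        va @ np.tanh(Wa @ decoder_state + Ua @ h + Va * c)
        for h, c in zip(enc_states, coverage)
    ])
    alpha = softmax(scores)                          # attention weights
    context = (alpha[:, None] * enc_states).sum(axis=0)
    new_coverage = coverage + alpha                  # accumulate history
    return context, alpha, new_coverage

# Toy usage: 5 source positions, hidden size 4
rng = np.random.default_rng(0)
d, L = 4, 5
s = rng.standard_normal(d)
H = rng.standard_normal((L, d))
cov = np.zeros(L)
Wa = rng.standard_normal((d, d)); Ua = rng.standard_normal((d, d))
Va = rng.standard_normal(d); va = rng.standard_normal(d)
ctx, alpha, cov = coverage_attention(s, H, cov, Wa, Ua, Va, va)
```

Because coverage starts at zero and grows by each step's attention weights, a source word that has already absorbed much attention contributes a larger coverage term, which the learned parameters can use to suppress its future scores.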

Zhaopeng Tu, Zhengdong Lu, Yang Liu, Xiaohua Liu, Hang Li • 2016

Related benchmarks

Task | Dataset | Result | Rank
Machine Translation (Chinese-to-English) | NIST 2003 (MT-03) | BLEU 38.04 | 52
Machine Translation (Chinese-to-English) | NIST MT-05 2005 | BLEU 38.73 | 42
Machine Translation | NIST MT 04 2004 (test) | BLEU 0.4109 | 27
Machine Translation | NIST MT 06 2006 (test) | BLEU 36.52 | 27
Machine Translation (Chinese-to-English) | NIST MT 2004 | BLEU 38.34 | 15
Machine Translation (Chinese-to-English) | NIST MT-06 | BLEU 34.25 | 15
Machine Translation | NIST Zh-En All (test) | BLEU 39.13 | 10
Machine Translation | NIST 03-06 Average (test) | BLEU 35.49 | 6
