
Variational Neural Machine Translation

About

Models of neural machine translation are often from a discriminative family of encoder-decoders that learn a conditional distribution of a target sentence given a source sentence. In this paper, we propose a variational model to learn this conditional distribution for neural machine translation: a variational encoder-decoder model that can be trained end-to-end. Different from the vanilla encoder-decoder model, which generates target translations from hidden representations of source sentences alone, the variational model introduces a continuous latent variable to explicitly model the underlying semantics of source sentences and to guide the generation of target translations. In order to perform efficient posterior inference and large-scale training, we build a neural posterior approximator conditioned on both the source and the target sides, and equip it with a reparameterization technique to estimate the variational lower bound. Experiments on both Chinese-English and English-German translation tasks show that the proposed variational neural machine translation achieves significant improvements over vanilla neural machine translation baselines.
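The two building blocks named in the abstract, the reparameterization technique and the variational lower bound, can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's implementation: it assumes a diagonal-Gaussian posterior with a standard-normal prior, and the function names (`reparameterize`, `kl_to_standard_normal`, `elbo`) are hypothetical. In the actual model, `mu` and `log_var` would be produced by a neural posterior approximator conditioned on both source and target sentences.

```python
import numpy as np

def reparameterize(mu, log_var, rng):
    # z = mu + sigma * eps with eps ~ N(0, I); sampling is moved into eps,
    # so gradients can flow through mu and log_var during training.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mu, log_var):
    # Analytic KL( N(mu, diag(sigma^2)) || N(0, I) ), summed over latent dims.
    return 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var)

def elbo(log_likelihood, mu, log_var):
    # Variational lower bound: E_q[log p(y | z, x)] - KL( q(z | x, y) || p(z) ).
    # log_likelihood stands in for the decoder's reconstruction term.
    return log_likelihood - kl_to_standard_normal(mu, log_var)

rng = np.random.default_rng(0)
mu, log_var = np.zeros(4), np.zeros(4)
z = reparameterize(mu, log_var, rng)   # one latent sample guiding generation
bound = elbo(-10.0, mu, log_var)       # KL is 0 when q equals the prior
```

Maximizing this bound trains the posterior approximator and the decoder jointly, which is what makes the variational encoder-decoder trainable end-to-end.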

Biao Zhang, Deyi Xiong, Jinsong Su, Hong Duan, Min Zhang • 2016

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Machine Translation | BMELD (En=>Ch) (test) | BLEU | 27.52 | 28 |
| Machine Translation | BConTrasT De=>En (test) | BLEU | 60.01 | 28 |
| Machine Translation | BMELD Ch=>En (test) | BLEU | 22.24 | 28 |
| Machine Translation | En -> De (test) | BLEU Score | 58.74 | 23 |
| Dialogue Coherence | De-En Base (test) | 1st Precision | 0.6602 | 7 |
| Human Evaluation of Machine Translation | Ch⇒En Base (test) | Preference Score | 0.535 | 6 |
