Neural Machine Translation with Phrase-Level Universal Visual Representations

About

Multimodal machine translation (MMT) aims to improve neural machine translation (NMT) with additional visual information, but most existing MMT methods require paired source-sentence and image input, which makes them suffer from a shortage of sentence-image pairs. In this paper, we propose a phrase-level retrieval-based method for MMT that obtains visual information for the source input from existing sentence-image datasets, so that MMT is no longer limited to paired sentence-image input. Our method performs retrieval at the phrase level and hence learns visual information from pairs of source phrases and grounded regions, which mitigates data sparsity. Furthermore, it employs a conditional variational auto-encoder to learn visual representations that filter out redundant visual information and retain only the information related to the phrase. Experiments show that the proposed method significantly outperforms strong baselines on multiple MMT datasets, especially when the textual context is limited.
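To make the two ideas in the abstract concrete, here is a minimal PyTorch sketch (not the authors' code): `retrieve_regions` stands in for phrase-level retrieval over a pre-built phrase-region index, and `PhraseConditionedCVAE` shows one plausible conditional VAE that keeps only phrase-relevant visual information. All names, dimensions, and the text-only prior network are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def retrieve_regions(phrase_emb, index_keys, index_values, top_k=4):
    """Toy phrase-level retrieval: score a source-phrase embedding against
    pre-computed phrase keys by cosine similarity and average the grounded
    region features of the top-k matches. (Illustrative, not the paper's code.)"""
    sims = F.cosine_similarity(phrase_emb.unsqueeze(0), index_keys, dim=-1)
    top = sims.topk(top_k).indices
    return index_values[top].mean(dim=0)

class PhraseConditionedCVAE(nn.Module):
    """Phrase-conditioned VAE sketch: compress a retrieved region feature
    into a small latent z, conditioned on the phrase, so that only
    phrase-relevant visual information survives the bottleneck."""
    def __init__(self, phrase_dim=512, region_dim=2048, latent_dim=128):
        super().__init__()
        # Recognition network q(z | region, phrase)
        self.post = nn.Linear(region_dim + phrase_dim, 2 * latent_dim)
        # Prior network p(z | phrase), usable when no region is available
        self.prior = nn.Linear(phrase_dim, 2 * latent_dim)
        # Decoder reconstructs the region feature from (z, phrase)
        self.dec = nn.Linear(latent_dim + phrase_dim, region_dim)

    def forward(self, phrase, region=None):
        p_mu, p_logvar = self.prior(phrase).chunk(2, dim=-1)
        if region is not None:  # training path: posterior sees the region
            q_mu, q_logvar = self.post(torch.cat([region, phrase], -1)).chunk(2, -1)
        else:                   # image-free path: fall back to the prior
            q_mu, q_logvar = p_mu, p_logvar
        z = q_mu + torch.exp(0.5 * q_logvar) * torch.randn_like(q_mu)
        recon = self.dec(torch.cat([z, phrase], dim=-1))
        # KL(q || p) between two diagonal Gaussians
        kl = 0.5 * (p_logvar - q_logvar - 1
                    + (q_logvar.exp() + (q_mu - p_mu).pow(2)) / p_logvar.exp()).sum(-1)
        return z, recon, kl

# Usage on random tensors, with a toy index of 100 phrase-region pairs:
keys, values = torch.randn(100, 512), torch.randn(100, 2048)
phrase = torch.randn(512)
region = retrieve_regions(phrase, keys, values)
model = PhraseConditionedCVAE()
z, recon, kl = model(phrase.unsqueeze(0), region.unsqueeze(0))
loss = F.mse_loss(recon, region.unsqueeze(0)) + kl.mean()  # ELBO-style objective
```

The latent `z` would then serve as the visual representation fused into the NMT model; the phrase-conditioned prior is one common CVAE design choice that lets the model run without retrieved images at test time.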

Qingkai Fang, Yang Feng • 2022

Related benchmarks

Task                                        | Dataset                         | Metric      | Result | Rank
Machine Translation                         | Multi30K En→Fr v1 2017 (test)   | BLEU        | 46     | 30
Machine Translation                         | Multi30K En→De (test)           | METEOR      | 54.1   | 26
Multimodal Machine Translation              | EMMT                            | BLEU        | 41.13  | 18
Multimodal Machine Translation              | Multi30K WMT17 (test)           | BLEU        | 33.45  | 16
Multimodal Machine Translation              | Multi30K 2016 (test)            | BLEU        | 40.3   | 11
Machine Translation                         | Multi30K En→Fr (test)           | BLEU        | 52.3   | 9
Machine Translation                         | WMT (test)                      | En-De Score | 28.5   | 7
Unsupervised Multimodal Machine Translation | Multi30K En-De and De-En (test) | Avg. BLEU   | 35     | 4
