
Good for Misconceived Reasons: An Empirical Revisiting on the Need for Visual Context in Multimodal Machine Translation

About

A neural multimodal machine translation (MMT) system aims to produce better translations by extending a conventional text-only translation model with multimodal information. Many recent studies report improvements when equipping their models with a multimodal module, despite ongoing controversy over whether such improvements actually come from the multimodal component. We revisit the contribution of multimodal information in MMT by devising two interpretable MMT models. To our surprise, although our models replicate the gains achieved by recently developed multimodal-integrated systems, they learn to ignore the multimodal information. Upon further investigation, we discover that the improvements achieved by the multimodal models over their text-only counterparts are in fact the result of a regularization effect. We report empirical findings that highlight the importance of interpretability in MMT models, and discuss how our findings will benefit future research.
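To make the idea of an "interpretable" multimodal model concrete, one common design is gated fusion: a scalar gate decides how much of the image representation enters the translation representation, and inspecting the trained gate reveals whether the model actually uses the image (a gate near zero means the visual context is ignored). The sketch below is illustrative only; the function name, shapes, and parameters are assumptions, not the paper's actual architecture.

```python
import numpy as np

def gated_fusion(text_repr, image_repr, w_gate, b_gate):
    """Fuse text and image vectors through a scalar sigmoid gate.

    Hypothetical sketch: returns the fused representation and the gate
    value lam in (0, 1). After training, lam close to 0 would indicate
    the model has learned to ignore the visual context.
    """
    # lam = sigmoid(w_gate . [text; image] + b_gate)
    score = np.concatenate([text_repr, image_repr]) @ w_gate + b_gate
    lam = 1.0 / (1.0 + np.exp(-score))
    # Fused representation: text plus gated visual contribution.
    return text_repr + lam * image_repr, lam

# Toy usage with random vectors (dimension 8 chosen arbitrarily).
rng = np.random.default_rng(0)
d = 8
text = rng.normal(size=d)
image = rng.normal(size=d)
w = rng.normal(size=2 * d)
fused, lam = gated_fusion(text, image, w, 0.0)
print(f"gate value: {lam:.3f}")
```

Under this kind of design, the gate is a single interpretable quantity: tracking its value over training (or across examples) directly measures how much the translation relies on the image.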

Zhiyong Wu, Lingpeng Kong, Wei Bi, Xiang Li, Ben Kao • 2021

Related benchmarks

Task | Dataset | Metric | Result | Rank
Multimodal Machine Translation | Multi30K (test) | - | - | 139
Multimodal Machine Translation (English-German) | Multi30K 2016 (test) | BLEU | 42 | 52
Multimodal Machine Translation | Multi30k En-De 2017 (test) | METEOR | 61.94 | 45
Multimodal Machine Translation | Multi30k En-Fr 2017 (test) | METEOR | 76.34 | 31
Machine Translation | Multi30k En→Fr v1 2017 (test) | BLEU | 54.85 | 30
Multimodal Machine Translation | Multi30k En-Fr 2016 (test) | METEOR | 81.29 | 30
Machine Translation | Multi30k Task1 (en-de) | BLEU | 41.96 | 26
Machine Translation | Multi30K En → De (test) | METEOR | 46.2 | 26
Machine Translation | Multi30k Task1 en-fr | BLEU | 62.12 | 25
Machine Translation | Multi30k M30kT (test) | BLEU | 33.59 | 19

Showing 10 of 44 rows

Other info

Code
