Image Pivoting for Learning Multilingual Multimodal Representations
About
In this paper we propose a model to learn multimodal multilingual representations for matching images and sentences in different languages, with the aim of advancing multilingual versions of image search and image understanding. Our model learns a common representation for images and their descriptions in two different languages (which need not be parallel) by considering the image as a pivot between two languages. We introduce a new pairwise ranking loss function which can handle both symmetric and asymmetric similarity between the two modalities. We evaluate our models on image-description ranking for German and English, and on semantic textual similarity of image descriptions in English. In both cases we achieve state-of-the-art performance.
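The abstract mentions a pairwise ranking loss that accommodates both symmetric and asymmetric similarity between images and sentences. Below is a minimal, illustrative sketch of such a max-margin ranking loss in PyTorch; the function names, the margin value, and the specific similarity functions (cosine for the symmetric case, an order-embedding-style penalty for the asymmetric case) are assumptions for illustration and are not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def cosine_sim(im, s):
    # Symmetric similarity: cosine between L2-normalised embeddings.
    im = F.normalize(im, dim=1)
    s = F.normalize(s, dim=1)
    return im @ s.t()                          # (num_images, num_sentences)

def order_sim(im, s):
    # Asymmetric similarity (order-embedding style): penalise sentence
    # dimensions that exceed the corresponding image dimensions.
    diff = torch.clamp(s.unsqueeze(0) - im.unsqueeze(1), min=0)
    return -(diff ** 2).sum(dim=2)             # (num_images, num_sentences)

def pairwise_ranking_loss(im, s, margin=0.05, sim_fn=cosine_sim):
    """Max-margin ranking loss over a batch of matching (image, sentence) pairs.

    im, s: (batch, dim) embeddings where row i of `im` matches row i of `s`.
    """
    scores = sim_fn(im, s)                     # pairwise similarity matrix
    pos = scores.diag().view(-1, 1)            # scores of the true pairs
    # Hinge cost of contrastive sentences (per image) and images (per sentence).
    cost_s = torch.clamp(margin + scores - pos, min=0)
    cost_im = torch.clamp(margin + scores - pos.t(), min=0)
    # True pairs on the diagonal contribute no cost.
    mask = torch.eye(scores.size(0), dtype=torch.bool, device=scores.device)
    cost_s = cost_s.masked_fill(mask, 0)
    cost_im = cost_im.masked_fill(mask, 0)
    return cost_s.sum() + cost_im.sum()
```

Swapping `sim_fn=order_sim` into the same loss gives the asymmetric variant, which is the sense in which a single ranking objective can handle both kinds of similarity.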
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Image-to-Text Retrieval | COCO-CN | -- | 48 |
| Image-Text Retrieval | MSCOCO (test) | EN Retrieval Score: 78.3 | 28 |
| Image-Text Retrieval | Flickr30k (test) | -- | 21 |