Unicoder-VL: A Universal Encoder for Vision and Language by Cross-modal Pre-training
About
We propose Unicoder-VL, a universal encoder that learns joint representations of vision and language through pre-training. Borrowing ideas from cross-lingual pre-trained models such as XLM and Unicoder, both visual and linguistic content are fed into a multi-layer Transformer for cross-modal pre-training, where three pre-training tasks are employed: Masked Language Modeling (MLM), Masked Object Classification (MOC), and Visual-linguistic Matching (VLM). The first two tasks learn context-aware representations for input tokens based jointly on linguistic and visual content. The last task predicts whether an image and a text describe each other. After pre-training on large-scale image-caption pairs, we transfer Unicoder-VL to caption-based image-text retrieval and visual commonsense reasoning with just one additional output layer. We achieve state-of-the-art or comparable results on both tasks, demonstrating the power of cross-modal pre-training.
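The joint-input design described above can be sketched in a few lines of PyTorch. This is a minimal illustration, not the authors' implementation: the hidden size, layer count, region-feature dimension, and the use of the first position for the matching head are all assumptions chosen for brevity. It shows the core idea of projecting text tokens and image-region features into one shared sequence, encoding them with a single Transformer, and attaching a small output head (here, a VLM-style match/no-match classifier).

```python
# Minimal sketch (assumed shapes and names, NOT the authors' code) of a
# cross-modal encoder: text tokens and image-region features are projected
# into a shared space, concatenated into one sequence, and fed through a
# multi-layer Transformer. A linear head on the first position produces
# Visual-linguistic Matching (VLM) logits.
import torch
import torch.nn as nn

class CrossModalEncoder(nn.Module):
    def __init__(self, vocab_size=1000, hidden=256, region_dim=2048, layers=2):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, hidden)   # text token embeddings
        self.img_proj = nn.Linear(region_dim, hidden)     # project region features
        enc_layer = nn.TransformerEncoderLayer(hidden, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=layers)
        self.vlm_head = nn.Linear(hidden, 2)              # match / no-match

    def forward(self, token_ids, region_feats):
        text = self.tok_emb(token_ids)        # (B, T, H)
        vis = self.img_proj(region_feats)     # (B, R, H)
        seq = torch.cat([text, vis], dim=1)   # joint visual-linguistic sequence
        out = self.encoder(seq)
        return self.vlm_head(out[:, 0])       # VLM logits from the first token

model = CrossModalEncoder()
tokens = torch.randint(0, 1000, (2, 8))   # batch of 2 captions, 8 tokens each
regions = torch.randn(2, 10, 2048)        # 10 detected regions per image
logits = model(tokens, regions)
print(logits.shape)  # torch.Size([2, 2])
```

Transferring to a downstream task such as image-text retrieval would amount to swapping `vlm_head` for a task-specific output layer, mirroring the "just one additional output layer" setup described above.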
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Text-to-Image Retrieval | Flickr30K | R@1 | 71.5 | 460 |
| Image-to-Text Retrieval | Flickr30K 1K (test) | R@1 | 86.2 | 439 |
| Text-to-Image Retrieval | Flickr30k (test) | R@1 | 71.5 | 423 |
| Image-to-Text Retrieval | Flickr30K | R@1 | 86.2 | 379 |
| Text-to-Image Retrieval | Flickr30K 1K (test) | R@1 | 71.5 | 375 |
| Image-to-Text Retrieval | Flickr30k (test) | R@1 | 86.2 | 370 |
| Image-to-Text Retrieval | MS-COCO 5K (test) | R@1 | 62.3 | 299 |
| Text-to-Image Retrieval | MSCOCO 5K (test) | R@1 | 62.3 | 286 |
| Text-to-Image Retrieval | MS-COCO 5K (test) | R@1 | 46.7 | 223 |
| Image Retrieval | MS-COCO 5K (test) | R@1 | 48.4 | 217 |