
Unicoder-VL: A Universal Encoder for Vision and Language by Cross-modal Pre-training

About

We propose Unicoder-VL, a universal encoder that learns joint representations of vision and language through pre-training. Borrowing ideas from cross-lingual pre-trained models such as XLM and Unicoder, we feed both visual and linguistic content into a multi-layer Transformer for cross-modal pre-training, using three pre-training tasks: Masked Language Modeling (MLM), Masked Object Classification (MOC), and Visual-linguistic Matching (VLM). The first two tasks learn context-aware representations for input tokens from linguistic and visual content jointly. The third task predicts whether an image and a text describe each other. After pre-training on large-scale image-caption pairs, we transfer Unicoder-VL to caption-based image-text retrieval and visual commonsense reasoning with just one additional output layer. We achieve state-of-the-art or comparable results on both tasks, demonstrating the power of cross-modal pre-training.
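To make the three pre-training tasks concrete, here is a minimal sketch of how a joint text-plus-region input with MLM, MOC, and VLM supervision could be constructed. This is an illustrative simplification, not the paper's implementation: the function name, the string-based mask token, and the use of integer ids in place of real region features and subword embeddings are all assumptions for readability.

```python
import random

MASK, CLS, SEP = "[MASK]", "[CLS]", "[SEP]"

def build_joint_input(caption_tokens, region_ids, mask_prob=0.15,
                      matched=True, rng=None):
    """Sketch of Unicoder-VL-style pre-training input construction.

    Caption tokens and image-region ids are concatenated into a single
    sequence for a shared multi-layer Transformer. A fraction of text
    tokens is masked for Masked Language Modeling (MLM); a fraction of
    region ids is masked for Masked Object Classification (MOC). The
    `matched` flag is the Visual-linguistic Matching (VLM) label:
    whether the caption and image actually describe each other
    (negative pairs come from pairing a caption with another image).
    """
    rng = rng or random.Random(0)
    text, mlm_labels = [], []
    for tok in caption_tokens:
        if rng.random() < mask_prob:
            text.append(MASK)
            mlm_labels.append(tok)   # model must recover the original token
        else:
            text.append(tok)
            mlm_labels.append(None)  # no MLM loss on unmasked positions
    regions, moc_labels = [], []
    for rid in region_ids:
        if rng.random() < mask_prob:
            regions.append(MASK)
            moc_labels.append(rid)   # model must predict the object class
        else:
            regions.append(rid)
            moc_labels.append(None)
    sequence = [CLS] + text + [SEP] + regions
    return sequence, mlm_labels, moc_labels, int(matched)
```

In a full model, the [CLS] position's final hidden state would feed the VLM matching head, while masked positions feed the MLM and MOC classifiers; downstream transfer (retrieval, VCR) replaces these heads with a single new output layer, as described above.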

Gen Li, Nan Duan, Yuejian Fang, Ming Gong, Daxin Jiang, Ming Zhou • 2019

Related benchmarks

| Task                    | Dataset              | Metric | Result | Rank |
|-------------------------|----------------------|--------|--------|------|
| Text-to-Image Retrieval | Flickr30K            | R@1    | 71.5   | 460  |
| Image-to-Text Retrieval | Flickr30K 1K (test)  | R@1    | 86.2   | 439  |
| Text-to-Image Retrieval | Flickr30K (test)     | R@1    | 71.5   | 423  |
| Image-to-Text Retrieval | Flickr30K            | R@1    | 86.2   | 379  |
| Text-to-Image Retrieval | Flickr30K 1K (test)  | R@1    | 71.5   | 375  |
| Image-to-Text Retrieval | Flickr30K (test)     | R@1    | 86.2   | 370  |
| Image-to-Text Retrieval | MS-COCO 5K (test)    | R@1    | 62.3   | 299  |
| Text-to-Image Retrieval | MS-COCO 5K (test)    | R@1    | 62.3   | 286  |
| Text-to-Image Retrieval | MS-COCO 5K (test)    | R@1    | 46.7   | 223  |
| Image Retrieval         | MS-COCO 5K (test)    | R@1    | 48.4   | 217  |

Showing 10 of 37 rows.
