
Unsupervised Vision-and-Language Pre-training Without Parallel Images and Captions

About

Pre-trained contextual vision-and-language (V&L) models have achieved impressive performance on various benchmarks. However, existing models require a large amount of parallel image-caption data for pre-training, and such data are costly to collect and require cumbersome curation. Inspired by unsupervised machine translation, we investigate whether a strong V&L representation model can be learned through unsupervised pre-training without image-caption corpora. In particular, we propose to conduct "mask-and-predict" pre-training on text-only and image-only corpora, and introduce the object tags detected by an object recognition model as anchor points to bridge the two modalities. We find that this simple approach achieves performance on four English V&L benchmarks close to that of a model pre-trained with aligned data. Our work challenges the widely held notion that aligned data are necessary for V&L pre-training, while significantly reducing the amount of supervision needed for V&L models.
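To make the "mask-and-predict" idea concrete, below is a minimal PyTorch sketch, not the authors' released code. A single-stream transformer is trained with masked-token prediction on text-only batches; on image-only batches, the detector's object tags are appended to the region features as a textual anchor and masked in the same way. All module names, dimensions, and the toy vocabulary here are hypothetical placeholders.

```python
# Minimal illustrative sketch of mask-and-predict pre-training with object
# tags as cross-modal anchors. Hypothetical sizes and vocabulary throughout.
import torch
import torch.nn as nn

MASK_ID, VOCAB = 0, 1000          # toy vocabulary; id 0 reserved for [MASK]
REGION_DIM, HIDDEN = 2048, 256    # e.g. detector region features -> transformer width

class JointEncoder(nn.Module):
    """Single-stream transformer shared by text-only and image-only batches."""
    def __init__(self):
        super().__init__()
        self.word_emb = nn.Embedding(VOCAB, HIDDEN)
        self.region_proj = nn.Linear(REGION_DIM, HIDDEN)   # regions into token space
        layer = nn.TransformerEncoderLayer(HIDDEN, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.word_head = nn.Linear(HIDDEN, VOCAB)          # predicts masked words / tags

    def forward(self, tokens=None, regions=None):
        parts = []
        if regions is not None:
            parts.append(self.region_proj(regions))        # region slots come first
        if tokens is not None:
            parts.append(self.word_emb(tokens))
        return self.encoder(torch.cat(parts, dim=1))

model = JointEncoder()
loss_fn = nn.CrossEntropyLoss()

# --- text-only step: ordinary masked language modeling -------------------
text = torch.randint(1, VOCAB, (2, 16))         # toy text-only batch
masked = text.clone()
masked[:, 3] = MASK_ID                          # mask one position for brevity
logits = model.word_head(model(tokens=masked))
text_loss = loss_fn(logits[:, 3], text[:, 3])

# --- image-only step: detected object tags serve as the anchor -----------
regions = torch.randn(2, 10, REGION_DIM)        # toy features for 10 regions
tags = torch.randint(1, VOCAB, (2, 10))         # detector tags ("dog", "ball", ...)
masked_tags = tags.clone()
masked_tags[:, 5] = MASK_ID                     # mask a tag; model must use regions
logits = model.word_head(model(tokens=masked_tags, regions=regions))
image_loss = loss_fn(logits[:, 10 + 5], tags[:, 5])  # tags follow the 10 region slots

(text_loss + image_loss).backward()
```

Because the masked tag can only be recovered from the surrounding region features, the shared word head ties the visual and textual streams together, which is the role the paper assigns to object tags as anchor points.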

Liunian Harold Li, Haoxuan You, Zhecan Wang, Alireza Zareian, Shih-Fu Chang, Kai-Wei Chang • 2020

Related benchmarks

Task | Dataset | Metric | Result | Rank
Visual Question Answering | VQA v2 (test-dev) | Overall Accuracy | 70.7 | 664
Visual Question Answering | VQA 2.0 (test-dev) | Accuracy | 71.8 | 337
Natural Language Visual Reasoning | NLVR2 (test-P) | Accuracy | 71.2 | 327
Referring Expression Comprehension | RefCOCO+ (testA) | Accuracy | 83.6 | 207
Visual Entailment | SNLI-VE (test) | Overall Accuracy | 76.8 | 197
Image Retrieval | Flickr30k (test) | R@1 | 55.4 | 195
Referring Expression Comprehension | RefCOCO+ (testB) | Accuracy | 69.9 | 167
Referring Expression Comprehension | RefCOCO+ (dev) | Accuracy | 78.2 | 9
