Unsupervised Vision-and-Language Pre-training Without Parallel Images and Captions
About
Pre-trained contextual vision-and-language (V&L) models have achieved impressive performance on various benchmarks. However, existing models require a large amount of parallel image-caption data for pre-training. Such data are costly to collect and require cumbersome curation. Inspired by unsupervised machine translation, we investigate whether a strong V&L representation model can be learned through unsupervised pre-training without image-caption corpora. In particular, we propose to conduct "mask-and-predict" pre-training on text-only and image-only corpora, and we introduce the object tags detected by an object recognition model as anchor points to bridge the two modalities. We find that this simple approach achieves performance on four English V&L benchmarks close to that of a model pre-trained with aligned data. Our work challenges the widely held notion that aligned data are necessary for V&L pre-training, while significantly reducing the amount of supervision needed for V&L models.
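The input construction described above can be sketched in plain Python. This is an illustrative, simplified sketch (not the paper's implementation): BERT-style masking is applied to two unaligned streams, a text-only sentence and the detected object tags of an image, where the tags act as pseudo-captions shared with the text vocabulary. The `mask_tokens` helper and the example streams are hypothetical names introduced here for illustration.

```python
import random

MASK = "[MASK]"  # placeholder token the model must fill in

def mask_tokens(tokens, mask_prob=0.15, rng=None):
    """Build a mask-and-predict training pair (illustrative sketch).

    Each token is replaced by [MASK] with probability mask_prob.
    Returns (masked, labels): labels[i] is the original token where
    position i was masked, else None (no loss computed there).
    """
    rng = rng or random.Random(0)
    masked, labels = [], []
    for tok in tokens:
        if rng.random() < mask_prob:
            masked.append(MASK)
            labels.append(tok)
        else:
            masked.append(tok)
            labels.append(None)
    return masked, labels

# Text-only stream: a sentence from a caption-free text corpus.
text_stream = "a dog chases a ball in the park".split()

# Image-only stream: object tags from a detector serve as anchor
# points, since words like "dog" also occur in the text corpus.
tag_stream = ["dog", "ball", "grass", "tree"]

masked_text, text_labels = mask_tokens(text_stream, rng=random.Random(42))
masked_tags, tag_labels = mask_tokens(tag_stream, rng=random.Random(42))
```

Because the same masking objective and vocabulary are used for both streams, the shared object-tag words give the model a common signal across modalities even though no image is ever paired with a caption.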
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Visual Question Answering | VQA v2 (test-dev) | Overall Accuracy | 70.7 | 664 |
| Visual Question Answering | VQA 2.0 (test-dev) | Accuracy | 71.8 | 337 |
| Natural Language Visual Reasoning | NLVR2 (test-P) | Accuracy | 71.2 | 327 |
| Referring Expression Comprehension | RefCOCO+ (testA) | Accuracy | 83.6 | 207 |
| Visual Entailment | SNLI-VE (test) | Overall Accuracy | 76.8 | 197 |
| Image Retrieval | Flickr30k (test) | R@1 | 55.4 | 195 |
| Referring Expression Comprehension | RefCOCO+ (testB) | Accuracy | 69.9 | 167 |
| Referring Expression Comprehension | RefCOCO+ (dev) | Accuracy | 78.2 | 9 |