# Seeing Out of tHe bOx: End-to-End Pre-training for Vision-Language Representation Learning

## About
We study the joint learning of a Convolutional Neural Network (CNN) and a Transformer for vision-language pre-training (VLPT), which aims to learn cross-modal alignments from millions of image-text pairs. State-of-the-art approaches extract salient image regions and align regions with words step by step. Because region-based visual features usually represent only parts of an image, it is challenging for existing vision-language models to fully understand the semantics of the paired natural language. In this paper, we propose SOHO, which "Sees Out of tHe bOx": it takes a whole image as input and learns vision-language representations in an end-to-end manner. SOHO does not require bounding-box annotations, which enables inference 10 times faster than region-based approaches. In particular, SOHO learns to extract comprehensive yet compact image features through a visual dictionary (VD) that facilitates cross-modal understanding. The VD is designed to represent consistent visual abstractions of similar semantics; it is updated on-the-fly and utilized in our proposed pre-training task, Masked Visual Modeling (MVM). We conduct experiments on four well-established vision-language tasks, following standard VLPT settings. SOHO achieves absolute gains of 2.0% R@1 on the MSCOCO text-retrieval 5k test split, 1.5% accuracy on the NLVR$^2$ test-P split, and 6.7% accuracy on the SNLI-VE test split.
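The visual dictionary described above can be thought of as a nearest-neighbor quantizer over grid-level CNN features, with its entries refreshed on-the-fly by a moving average. The following is a minimal sketch of that idea in NumPy; the function name, dictionary size, and momentum value are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def vd_quantize(features, dictionary, momentum=0.99):
    """Map each feature to its nearest visual-dictionary entry and
    update the selected entries with a moving average.

    features:   (N, D) array of grid-level visual features
    dictionary: (K, D) array of VD embeddings (updated in place)
    Returns the (N,) assignment indices and the (N, D) quantized features.

    NOTE: a hedged sketch of the VD mechanism, not SOHO's actual code.
    """
    # Squared Euclidean distance from every feature to every VD entry.
    dists = ((features[:, None, :] - dictionary[None, :, :]) ** 2).sum(-1)
    indices = dists.argmin(axis=1)

    # On-the-fly update: pull each selected entry toward the mean of
    # the features assigned to it (momentum is an assumed value).
    for k in np.unique(indices):
        assigned = features[indices == k]
        dictionary[k] = momentum * dictionary[k] + (1 - momentum) * assigned.mean(axis=0)

    return indices, dictionary[indices]

rng = np.random.default_rng(0)
feats = rng.standard_normal((8, 4))   # 8 grid features, dim 4
vd = rng.standard_normal((16, 4))     # dictionary of 16 entries
idx, quantized = vd_quantize(feats, vd)
print(idx.shape, quantized.shape)     # (8,) (8, 4)
```

In MVM, masked grid features would then be predicted at the level of these discrete VD indices rather than raw pixels, which is what makes the dictionary useful as a pre-training target.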
## Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Visual Question Answering | VQA v2 (test-dev) | Overall Accuracy | 73.25 | 664 |
| Visual Question Answering | VQA v2 (test-std) | Accuracy | 73.47 | 466 |
| Text-to-Image Retrieval | Flickr30k (test) | R@1 | 72.5 | 423 |
| Image-to-Text Retrieval | Flickr30k (test) | R@1 | 86.5 | 370 |
| Visual Question Answering | VQA 2.0 (test-dev) | Accuracy | 73.25 | 337 |
| Natural Language Visual Reasoning | NLVR2 (test-P) | Accuracy | 77.32 | 327 |
| Image-to-Text Retrieval | MS-COCO 5K (test) | R@1 | 66.4 | 299 |
| Natural Language Visual Reasoning | NLVR2 (dev) | Accuracy | 76.37 | 288 |
| Text-to-Image Retrieval | MS-COCO 5K (test) | R@1 | 66.4 | 286 |
| Image Retrieval | MS-COCO 5K (test) | R@1 | 50.6 | 217 |