
Oscar: Object-Semantics Aligned Pre-training for Vision-Language Tasks

About

Large-scale pre-training methods that learn cross-modal representations on image-text pairs are becoming popular for vision-language tasks. Existing methods simply concatenate image region features and text features as input to the model to be pre-trained, and use self-attention to learn image-text semantic alignments in a brute-force manner. In this paper, we propose a new learning method, Oscar (Object-Semantics Aligned Pre-training), which uses object tags detected in images as anchor points to significantly ease the learning of alignments. Our method is motivated by the observation that the salient objects in an image can be accurately detected, and are often mentioned in the paired text. We pre-train an Oscar model on a public corpus of 6.5 million text-image pairs, and fine-tune it on downstream tasks, creating new state-of-the-art results on six well-established vision-language understanding and generation tasks.
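The input described in the abstract — caption words, detected object tags, and image region features concatenated into one sequence — can be sketched as follows. This is a minimal illustration, not the authors' code: the embedding table, the 2054-dimensional region features (2048-d visual feature plus 6 position coordinates), and the projection matrix `W` are assumptions chosen for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 768  # transformer hidden size (assumed)

def embed_tokens(tokens, vocab, table):
    """Look up embeddings from a text embedding table shared by words and tags."""
    return table[[vocab[t] for t in tokens]]

# Toy vocabulary shared by caption words and object tags, mirroring
# Oscar's use of the same text embedding space for both.
vocab = {w: i for i, w in enumerate(["a", "dog", "sits", "on", "couch"])}
table = rng.normal(size=(len(vocab), d))

caption = ["a", "dog", "sits", "on", "a", "couch"]
tags = ["dog", "couch"]                 # object tags detected in the image (anchor points)
regions = rng.normal(size=(2, 2054))    # region features: 2048-d visual + 6-d position (assumed layout)
W = rng.normal(size=(2054, d)) * 0.01   # linear projection to hidden size (assumed)

# One sequence for self-attention: words, then tags, then projected regions.
seq = np.concatenate([
    embed_tokens(caption, vocab, table),  # word embeddings
    embed_tokens(tags, vocab, table),     # tag embeddings
    regions @ W,                          # projected region features
], axis=0)

print(seq.shape)  # (len(caption) + len(tags) + num_regions, d) = (10, 768)
```

Because the tags live in the same text embedding space as the caption words, attention between a tag token and the word that names the same object is easy to learn, which is what "anchor points" refers to.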

Xiujun Li, Xi Yin, Chunyuan Li, Pengchuan Zhang, Xiaowei Hu, Lei Zhang, Lijuan Wang, Houdong Hu, Li Dong, Furu Wei, Yejin Choi, Jianfeng Gao • 2020

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Visual Question Answering | GQA | Accuracy | 61.6 | 1249 |
| Visual Question Answering | VQA v2 (test-dev) | Overall Accuracy | 73.61 | 706 |
| Image Captioning | MS COCO Karpathy (test) | CIDEr | 127.8 | 682 |
| Visual Question Answering | VQA v2 (test-std) | Accuracy | 76.12 | 486 |
| Text-to-Image Retrieval | Flickr30k (test) | Recall@1 | 75.9 | 445 |
| Natural Language Understanding | GLUE (test) | SST-2 Accuracy | 87.3 | 416 |
| Natural Language Visual Reasoning | NLVR2 (test-p) | Accuracy | 80.37 | 346 |
| Visual Question Answering | VQA 2.0 (test-dev) | Accuracy | 75.95 | 337 |
| Image-to-Text Retrieval | MS-COCO 5K (test) | R@1 | 73.5 | 320 |
| Text-to-Image Retrieval | MSCOCO 5K (test) | R@1 | 59.9 | 308 |

(Showing 10 of 93 rows.)
