
VL-BEiT: Generative Vision-Language Pretraining

About

We introduce a vision-language foundation model called VL-BEiT, which is a bidirectional multimodal Transformer learned by generative pretraining. Our minimalist solution conducts masked prediction on both monomodal and multimodal data with a shared Transformer. Specifically, we perform masked vision-language modeling on image-text pairs, masked language modeling on texts, and masked image modeling on images. VL-BEiT is learned from scratch with one unified pretraining task, one shared backbone, and one-stage training. Our method is conceptually simple and empirically effective. Experimental results show that VL-BEiT obtains strong results on various vision-language benchmarks, such as visual question answering, visual reasoning, and image-text retrieval. Moreover, our method learns transferable visual features, achieving competitive performance on image classification and semantic segmentation.
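To make the recipe concrete, here is a minimal PyTorch sketch of the idea described above: one shared bidirectional Transformer trained purely by masked prediction on text (masked language modeling), on images (masked image modeling over discrete visual tokens, as in BEiT), and on image-text pairs (masked vision-language modeling). All sizes and names, and the plain nn.TransformerEncoder backbone, are illustrative stand-ins rather than the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedMaskedTransformer(nn.Module):
    """Sketch: one backbone, one masked-prediction objective, any modality mix."""
    def __init__(self, text_vocab=30522, visual_vocab=8192, dim=768,
                 depth=12, heads=12, num_patches=196, max_text_len=64):
        super().__init__()
        self.text_embed = nn.Embedding(text_vocab, dim)
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=16, stride=16)
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.text_pos = nn.Parameter(torch.zeros(1, max_text_len, dim))
        self.image_pos = nn.Parameter(torch.zeros(1, num_patches, dim))
        layer = nn.TransformerEncoderLayer(dim, heads, 4 * dim, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, depth)  # one shared backbone
        self.text_head = nn.Linear(dim, text_vocab)    # predicts masked words
        self.image_head = nn.Linear(dim, visual_vocab) # predicts masked visual tokens

    def embed(self, images=None, tokens=None, image_mask=None, text_mask=None):
        """Embed each present modality, swapping masked positions for a learned token."""
        parts = []
        if tokens is not None:
            x = self.text_embed(tokens)
            x = torch.where(text_mask.unsqueeze(-1), self.mask_token.expand_as(x), x)
            parts.append(x + self.text_pos[:, : x.shape[1]])
        if images is not None:
            x = self.patch_embed(images).flatten(2).transpose(1, 2)  # (B, N, dim)
            x = torch.where(image_mask.unsqueeze(-1), self.mask_token.expand_as(x), x)
            parts.append(x + self.image_pos)
        return torch.cat(parts, dim=1), (0 if tokens is None else tokens.shape[1])

    def forward(self, **inputs):
        h, text_len = self.embed(**inputs)
        h = self.backbone(h)  # bidirectional self-attention over the joint sequence
        return h[:, :text_len], h[:, text_len:]  # text states, image states

def masked_ce(logits, targets, mask):
    """Cross-entropy computed only at masked positions."""
    return F.cross_entropy(logits[mask], targets[mask])
```

All three pretraining losses then route through this one backbone; the only thing that changes is which modalities appear in the batch. For an image-text pair:

```python
model = SharedMaskedTransformer()
images = torch.randn(2, 3, 224, 224)
tokens = torch.randint(0, 30522, (2, 32))
text_mask = torch.rand(2, 32) < 0.15    # mask a fraction of words
image_mask = torch.rand(2, 196) < 0.4   # mask a larger fraction of patches
text_h, image_h = model(images=images, tokens=tokens,
                        image_mask=image_mask, text_mask=text_mask)
loss = masked_ce(model.text_head(text_h), tokens, text_mask)
# The image-side loss additionally needs targets from an off-the-shelf
# discrete visual tokenizer (as in BEiT), which this sketch omits.
```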

Hangbo Bao, Wenhui Wang, Li Dong, Furu Wei · 2022

Related benchmarks

Task                               Dataset              Metric            Result  Rank
Visual Question Answering          VQA v2 (test-dev)    Overall Accuracy  77.5    664
Visual Question Answering          VQA v2 (test-std)    Accuracy          77.8    466
Image-to-Text Retrieval            Flickr30K 1K (test)  R@1               95.8    439
Text-to-Image Retrieval            Flickr30K 1K (test)  R@1               83.9    375
Natural Language Visual Reasoning  NLVR2 (test-p)       Accuracy          82.7    327
Natural Language Visual Reasoning  NLVR2 (dev)          Accuracy          81.9    288
Text-to-Image Retrieval            MS-COCO 5K (test)    R@1               61.5    286
Image Retrieval                    MS-COCO 5K (test)    R@1               61.5    217
Text Retrieval                     MS-COCO 5K (test)    R@1               79.5    182
Text Retrieval                     Flickr30K 1K (test)  R@1               95.8    82

Showing 10 of 13 rows.
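For reference, R@1 ("recall at 1") in the retrieval rows above is the fraction of queries whose correct match is ranked first. A minimal sketch of R@K over a precomputed query-candidate similarity matrix, assuming for simplicity that query i's ground truth is candidate i (benchmarks like Flickr30K, with several captions per image, index ground truth slightly differently):

```python
import torch

def recall_at_k(sim: torch.Tensor, k: int = 1) -> float:
    """sim[i, j] = similarity of query i to candidate j; ground truth on the diagonal."""
    topk = sim.topk(k, dim=1).indices                  # (num_queries, k) candidate ids
    targets = torch.arange(sim.shape[0]).unsqueeze(1)  # query i matches candidate i
    return (topk == targets).any(dim=1).float().mean().item()
```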
