VisualBERT: A Simple and Performant Baseline for Vision and Language

About

We propose VisualBERT, a simple and flexible framework for modeling a broad range of vision-and-language tasks. VisualBERT consists of a stack of Transformer layers that implicitly align elements of an input text and regions in an associated input image with self-attention. We further propose two visually-grounded language model objectives for pre-training VisualBERT on image caption data. Experiments on four vision-and-language tasks including VQA, VCR, NLVR2, and Flickr30K show that VisualBERT outperforms or rivals state-of-the-art models while being significantly simpler. Further analysis demonstrates that VisualBERT can ground elements of language to image regions without any explicit supervision and is even sensitive to syntactic relationships, tracking, for example, associations between verbs and image regions corresponding to their arguments.
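The architecture is straightforward to sketch: detector-derived region features (e.g., from Faster R-CNN) are projected into the text embedding space, tagged with a segment embedding marking them as visual input, concatenated with the token embeddings, and passed through a standard Transformer encoder so self-attention operates over the joint sequence. Below is a minimal PyTorch sketch of that forward pass; the module names, hyperparameters (768-d hidden size, 2048-d region features, 12 layers), and the use of nn.TransformerEncoder are illustrative assumptions rather than the authors' exact implementation, and position embeddings and the pre-training heads are omitted.

```python
import torch
import torch.nn as nn

class VisualBERTSketch(nn.Module):
    """Minimal sketch of a VisualBERT-style joint text-image encoder.

    Hyperparameters and module choices are illustrative assumptions,
    not the paper's exact configuration.
    """

    def __init__(self, vocab_size=30522, hidden=768, region_dim=2048,
                 n_layers=12, n_heads=12):
        super().__init__()
        self.token_embed = nn.Embedding(vocab_size, hidden)
        # Project detector region features (e.g., Faster R-CNN) into
        # the same space as the text token embeddings.
        self.region_proj = nn.Linear(region_dim, hidden)
        # Segment embeddings distinguish text (0) from image (1) input.
        self.segment_embed = nn.Embedding(2, hidden)
        layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=n_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)

    def forward(self, token_ids, region_feats):
        # token_ids: (batch, n_tokens); region_feats: (batch, n_regions, region_dim).
        # Position embeddings are omitted here for brevity.
        text = self.token_embed(token_ids) + self.segment_embed(
            torch.zeros_like(token_ids))
        image = self.region_proj(region_feats) + self.segment_embed(
            torch.ones(region_feats.shape[:2], dtype=torch.long,
                       device=region_feats.device))
        # Concatenate the two sequences; self-attention aligns words with
        # image regions implicitly, with no explicit alignment supervision.
        joint = torch.cat([text, image], dim=1)
        return self.encoder(joint)

model = VisualBERTSketch()
tokens = torch.randint(0, 30522, (1, 16))  # a 16-token caption
regions = torch.randn(1, 36, 2048)         # 36 detected regions
out = model(tokens, regions)               # shape: (1, 16 + 36, 768)
```

In the full model, this joint encoder is pre-trained on image-caption data with the two visually-grounded language model objectives before being fine-tuned on each downstream task.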

Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, Kai-Wei Chang • 2019

Related benchmarks

Task                                | Dataset            | Metric           | Result | Rank
------------------------------------|--------------------|------------------|--------|-----
Visual Question Answering           | VQA v2 (test-dev)  | Overall Accuracy | 70.9   | 664
Natural Language Understanding      | GLUE (dev)         | SST-2 Accuracy   | 89.4   | 504
Visual Question Answering           | VQA v2 (test-std)  | Accuracy         | 71.0   | 466
Natural Language Understanding      | GLUE               | SST-2 Accuracy   | 89.4   | 452
Natural Language Understanding      | GLUE (test)        | SST-2 Accuracy   | 90.3   | 416
Visual Question Answering           | VQA 2.0 (test-dev) | Accuracy         | 70.9   | 337
Natural Language Visual Reasoning   | NLVR2 (test-p)     | Accuracy         | 74.5   | 327
Natural Language Visual Reasoning   | NLVR2 (dev)        | Accuracy         | 74.9   | 288
Science Question Answering          | ScienceQA (test)   | Average Accuracy | 61.87  | 208
Referring Expression Comprehension  | RefCOCO+ (testA)   | Accuracy         | 79.5   | 207

Showing 10 of 69 rows.
