VisualBERT: A Simple and Performant Baseline for Vision and Language

About

We propose VisualBERT, a simple and flexible framework for modeling a broad range of vision-and-language tasks. VisualBERT consists of a stack of Transformer layers that implicitly align elements of an input text and regions in an associated input image with self-attention. We further propose two visually-grounded language model objectives for pre-training VisualBERT on image caption data. Experiments on four vision-and-language tasks including VQA, VCR, NLVR2, and Flickr30K show that VisualBERT outperforms or rivals state-of-the-art models while being significantly simpler. Further analysis demonstrates that VisualBERT can ground elements of language to image regions without any explicit supervision and is even sensitive to syntactic relationships, tracking, for example, associations between verbs and image regions corresponding to their arguments.
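The core idea above can be sketched in a few lines: project detected region features into the same embedding space as the text tokens, concatenate both into one sequence, and let self-attention operate jointly over words and regions. The NumPy sketch below is illustrative only; the hidden size, the single attention head, and all random weights are toy stand-ins, not the paper's configuration (VisualBERT builds on BERT-base with pre-trained weights and detector features).

```python
import numpy as np

rng = np.random.default_rng(0)
HID = 64  # toy hidden size (assumed; VisualBERT uses BERT-base's 768)

# --- Inputs: 5 text tokens and 3 detected image regions (random stand-ins) ---
text_emb = rng.normal(size=(5, HID))       # token embeddings (stand-in for BERT's)
region_feats = rng.normal(size=(3, 2048))  # detector-style region features

# Project visual features to the Transformer's hidden size
W_vis = rng.normal(size=(2048, HID)) * 0.02
visual_emb = region_feats @ W_vis

# Segment embeddings mark which modality each position comes from
seg_text, seg_vis = rng.normal(size=(HID,)), rng.normal(size=(HID,))
seq = np.concatenate([text_emb + seg_text, visual_emb + seg_vis], axis=0)  # (8, HID)

def self_attention(x, W_q, W_k, W_v):
    """Single-head scaled dot-product self-attention over the joint sequence."""
    q, k, v = x @ W_q, x @ W_k, x @ W_v
    scores = q @ k.T / np.sqrt(x.shape[1])
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ v, weights

W_q, W_k, W_v = (rng.normal(size=(HID, HID)) * 0.02 for _ in range(3))
out, attn = self_attention(seq, W_q, W_k, W_v)

# Because text and regions share one sequence, every text token attends to
# every image region (and vice versa) -- the mechanism behind the implicit
# alignment the abstract describes.
print(out.shape)  # contextualized joint sequence: (8, 64)
```

A full model stacks many such layers (with multi-head attention, feed-forward blocks, and residual connections) and adds the two visually-grounded pre-training objectives on caption data.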

Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, Kai-Wei Chang • 2019

Related benchmarks

Task | Dataset | Metric | Result | Rank
Visual Question Answering | VQA v2 (test-dev) | Overall Accuracy | 70.9 | 706
Natural Language Understanding | GLUE | SST-2 | 89.4 | 531
Natural Language Understanding | GLUE (dev) | SST-2 (Acc) | 89.4 | 518
Visual Question Answering | VQA v2 (test-std) | Accuracy | 71 | 486
Natural Language Understanding | GLUE (test) | SST-2 Accuracy | 90.3 | 416
Natural Language Visual Reasoning | NLVR2 (test-p) | Accuracy | 74.5 | 346
Visual Question Answering | VQA 2.0 (test-dev) | Accuracy | 70.9 | 337
Natural Language Visual Reasoning | NLVR2 (dev) | Accuracy | 74.9 | 307
Science Question Answering | ScienceQA (test) | Average Accuracy | 61.87 | 245
Referring Expression Comprehension | RefCOCO+ (testA) | Accuracy | 79.5 | 216

Showing 10 of 72 rows
