
ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision

About

Vision-and-Language Pre-training (VLP) has improved performance on various joint vision-and-language downstream tasks. Current approaches to VLP rely heavily on image feature extraction processes, most of which involve region supervision (e.g., object detection) and convolutional architectures (e.g., ResNet). Although disregarded in the literature, we find this problematic in terms of both (1) efficiency/speed, in that simply extracting input features requires much more computation than the multimodal interaction steps; and (2) expressive power, as it is upper-bounded by the expressive power of the visual embedder and its predefined visual vocabulary. In this paper, we present a minimal VLP model, Vision-and-Language Transformer (ViLT), monolithic in the sense that the processing of visual inputs is drastically simplified to the same convolution-free manner in which we process textual inputs. We show that ViLT is up to tens of times faster than previous VLP models, yet achieves competitive or better downstream task performance. Our code and pre-trained weights are available at https://github.com/dandelin/vilt.
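The "convolution-free" visual processing described above follows the ViT-style patch-projection idea: the image is cut into fixed-size patches, each patch is flattened, and a single linear projection turns it into a token embedding, replacing the CNN/region-detector pipeline entirely. A rough illustrative sketch in NumPy (the function name and random projection matrix are placeholders for exposition, not ViLT's trained weights):

```python
import numpy as np

def patch_embed(image, patch_size=32, dim=768, rng=None):
    """Convolution-free visual embedding, ViT-style: split the image into
    non-overlapping patches, flatten each, and apply one linear projection.
    Illustrative sketch only; the real embedder is learned end-to-end."""
    rng = rng or np.random.default_rng(0)
    H, W, C = image.shape
    assert H % patch_size == 0 and W % patch_size == 0
    # Cut the image into (H/p) x (W/p) patches of shape p x p x C.
    patches = image.reshape(H // patch_size, patch_size,
                            W // patch_size, patch_size, C)
    patches = patches.transpose(0, 2, 1, 3, 4)
    patches = patches.reshape(-1, patch_size * patch_size * C)
    # One linear layer stands in for the whole visual feature extractor.
    W_proj = rng.standard_normal((patches.shape[1], dim)) * 0.02
    return patches @ W_proj  # (num_patches, dim) visual token embeddings

tokens = patch_embed(np.zeros((224, 224, 3)), patch_size=32, dim=768)
print(tokens.shape)  # (49, 768): 7 x 7 patch tokens, ready for the transformer
```

These patch tokens are then concatenated with the text token embeddings and fed through a single shared transformer, which is why the feature-extraction cost is negligible compared to region-supervised pipelines.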

Wonjae Kim, Bokyung Son, Ildoo Kim • 2021

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Visual Question Answering | VQA v2 (test-dev) | Overall Accuracy | 71.3 | 664 |
| Text-to-Image Retrieval | Flickr30K | R@1 | 61.5 | 460 |
| Image-to-Text Retrieval | Flickr30K 1K (test) | R@1 | 83.7 | 439 |
| Text-to-Image Retrieval | Flickr30k (test) | R@1 | 64.4 | 423 |
| Image-to-Text Retrieval | Flickr30K | R@1 | 74.8 | 379 |
| Text-to-Image Retrieval | Flickr30K 1K (test) | R@1 | 64.4 | 375 |
| Visual Question Answering | VQA 2.0 (test-dev) | Accuracy | 71.3 | 337 |
| Natural Language Visual Reasoning | NLVR2 (test-p) | Accuracy | 76.21 | 327 |
| Image-to-Text Retrieval | MS-COCO 5K (test) | R@1 | 61.8 | 299 |
| Natural Language Visual Reasoning | NLVR2 (dev) | Accuracy | 75.7 | 288 |
Showing 10 of 111 rows.
