
Large-Scale Adversarial Training for Vision-and-Language Representation Learning

About

We present VILLA, the first known effort on large-scale adversarial training for vision-and-language (V+L) representation learning. VILLA consists of two training stages: (i) task-agnostic adversarial pre-training; followed by (ii) task-specific adversarial finetuning. Instead of adding adversarial perturbations on image pixels and textual tokens, we propose to perform adversarial training in the embedding space of each modality. To enable large-scale training, we adopt the "free" adversarial training strategy, and combine it with KL-divergence-based regularization to promote higher invariance in the embedding space. We apply VILLA to current best-performing V+L models, and achieve new state of the art on a wide range of tasks, including Visual Question Answering, Visual Commonsense Reasoning, Image-Text Retrieval, Referring Expression Comprehension, Visual Entailment, and NLVR2.
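The core idea above (perturb the embeddings rather than the raw pixels/tokens, then regularize the clean and perturbed predictions toward each other with a KL term) can be sketched in a few lines. This is a minimal NumPy illustration, not the authors' implementation: VILLA uses "free" multi-step adversarial training on large V+L transformers, while here a single gradient-direction perturbation on a toy linear head stands in for the model, and `eps` and `alpha` are illustrative hyperparameters.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(p, y):
    # y is one-hot; mean negative log-likelihood over the batch
    return -np.sum(y * np.log(p + 1e-12), axis=-1).mean()

def kl(p, q):
    # KL(p || q), averaged over the batch
    return np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=-1).mean()

def adversarial_embedding_loss(W, emb, y, eps=0.1, alpha=1.0):
    """One training objective in the spirit of embedding-space adversarial
    training (sketch).

    W   : (d, k) linear classifier, a stand-in for the V+L model head
    emb : (n, d) clean multimodal embeddings
    y   : (n, k) one-hot labels
    """
    p_clean = softmax(emb @ W)
    # Gradient of the cross-entropy w.r.t. the embeddings
    # (analytic for a linear head: (p - y) W^T)
    g = (p_clean - y) @ W.T
    # L2-normalized ascent direction, scaled to an eps-ball in embedding space
    delta = eps * g / (np.linalg.norm(g, axis=-1, keepdims=True) + 1e-12)
    p_adv = softmax((emb + delta) @ W)
    # clean task loss + adversarial task loss + KL smoothness regularizer,
    # the latter encouraging invariance of predictions under the perturbation
    return (cross_entropy(p_clean, y)
            + cross_entropy(p_adv, y)
            + alpha * kl(p_clean, p_adv))
```

With `eps=0` the perturbation vanishes, the KL term is zero, and the objective reduces to twice the clean cross-entropy; with `eps>0` the adversarial and KL terms penalize predictions that change sharply under small embedding shifts.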

Zhe Gan, Yen-Chun Chen, Linjie Li, Chen Zhu, Yu Cheng, Jingjing Liu · 2020

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Visual Question Answering | VQA v2 (test-dev) | Overall Accuracy | 74.7 | 664 |
| Visual Question Answering | VQA v2 (test-std) | Accuracy | 74.9 | 466 |
| Text-to-Image Retrieval | Flickr30K | R@1 | 76.3 | 460 |
| Image-to-Text Retrieval | Flickr30K 1K (test) | R@1 | 87.9 | 439 |
| Text-to-Image Retrieval | Flickr30K (test) | Recall@1 | 76.3 | 423 |
| Image-to-Text Retrieval | Flickr30K | R@1 | 87.9 | 379 |
| Text-to-Image Retrieval | Flickr30K 1K (test) | R@1 | 76.3 | 375 |
| Image-to-Text Retrieval | Flickr30K (test) | R@1 | 87.9 | 370 |
| Referring Expression Comprehension | RefCOCO+ (val) | Accuracy | 84.4 | 345 |
| Visual Question Answering | VQA 2.0 (test-dev) | Accuracy | 74.69 | 337 |

Showing 10 of 86 rows.

Other info

Code
