Large-Scale Adversarial Training for Vision-and-Language Representation Learning
About
We present VILLA, the first known effort on large-scale adversarial training for vision-and-language (V+L) representation learning. VILLA consists of two training stages: (i) task-agnostic adversarial pre-training, followed by (ii) task-specific adversarial finetuning. Instead of adding adversarial perturbations to image pixels and textual tokens, we propose performing adversarial training in the embedding space of each modality. To enable large-scale training, we adopt the "free" adversarial training strategy and combine it with KL-divergence-based regularization to promote higher invariance in the embedding space. We apply VILLA to the current best-performing V+L models and achieve new state of the art on a wide range of tasks, including Visual Question Answering, Visual Commonsense Reasoning, Image-Text Retrieval, Referring Expression Comprehension, Visual Entailment, and NLVR2.
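The core recipe above (perturb the embeddings rather than the raw inputs, reuse gradients "free"-style to update both the perturbation and the model, and regularize with a KL term between clean and adversarial predictions) can be illustrated with a toy numpy sketch. This is not the VILLA implementation: a logistic-regression classifier stands in for the V+L model, the embedding is a random vector, and all names and hyperparameters (`adv_lr`, `eps_ball`, `K`) are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bernoulli_kl(p, q, eps=1e-8):
    # KL divergence between two Bernoulli distributions; always >= 0.
    p, q = np.clip(p, eps, 1 - eps), np.clip(q, eps, 1 - eps)
    return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

rng = np.random.default_rng(0)
d = 16
w = rng.normal(size=d) * 0.1   # toy classifier weights (stand-in for the V+L model)
x = rng.normal(size=d)          # stand-in for a multimodal embedding
y = 1.0                         # toy label

lr, adv_lr, eps_ball, K = 0.1, 0.02, 0.05, 3
delta = np.zeros(d)             # adversarial perturbation lives in embedding space
for _ in range(K):
    # "Free" adversarial training: the same gradient pass drives both the
    # perturbation ascent step and the model descent step.
    p_adv = sigmoid(w @ (x + delta))
    # Gradient of the cross-entropy loss w.r.t. the perturbed embedding.
    g_delta = (p_adv - y) * w
    # Ascent on delta, projected back into the L-inf ball of radius eps_ball.
    delta = np.clip(delta + adv_lr * np.sign(g_delta), -eps_ball, eps_ball)
    # KL term between clean and adversarial predictions (the invariance
    # regularizer; its gradient is omitted from the update for brevity).
    p_clean = sigmoid(w @ x)
    kl = bernoulli_kl(p_clean, p_adv)
    # Descent on the model weights using the adversarial input.
    g_w = (p_adv - y) * (x + delta)
    w -= lr * g_w
```

The projection via `np.clip` keeps the perturbation inside a small norm ball, so the adversary can only move the embedding slightly; the KL term measures how much the prediction changed under that move, which is what the regularizer penalizes.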
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Visual Question Answering | VQA v2 (test-dev) | Overall Accuracy | 74.7 | 664 |
| Visual Question Answering | VQA v2 (test-std) | Accuracy | 74.9 | 466 |
| Text-to-Image Retrieval | Flickr30K | R@1 | 76.3 | 460 |
| Image-to-Text Retrieval | Flickr30K 1K (test) | R@1 | 87.9 | 439 |
| Text-to-Image Retrieval | Flickr30K (test) | R@1 | 76.3 | 423 |
| Image-to-Text Retrieval | Flickr30K | R@1 | 87.9 | 379 |
| Text-to-Image Retrieval | Flickr30K 1K (test) | R@1 | 76.3 | 375 |
| Image-to-Text Retrieval | Flickr30K (test) | R@1 | 87.9 | 370 |
| Referring Expression Comprehension | RefCOCO+ (val) | Accuracy | 84.4 | 345 |
| Visual Question Answering | VQA 2.0 (test-dev) | Accuracy | 74.69 | 337 |