
Towards Adversarial Attack on Vision-Language Pre-training Models

About

While vision-language pre-training (VLP) models have shown revolutionary improvements on various vision-language (V+L) tasks, their adversarial robustness remains largely unexplored. This paper studies adversarial attacks on popular VLP models and V+L tasks. First, we analyze the performance of adversarial attacks under different settings. By examining the influence of different perturbed objects and attack targets, we derive key observations that can guide both the design of strong multimodal adversarial attacks and the construction of robust VLP models. Second, we propose a novel multimodal attack on VLP models, Collaborative Multimodal Adversarial Attack (Co-Attack), which carries out attacks on the image and text modalities jointly. Experimental results demonstrate that the proposed method achieves improved attack performance on different V+L downstream tasks and VLP models. We hope these observations and the new attack method provide a deeper understanding of the adversarial robustness of VLP models and contribute to their safe and reliable deployment in real-world scenarios. Code is available at https://github.com/adversarial-for-goodness/Co-Attack.
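Co-Attack itself is defined in the paper and repository above; as a rough illustration of the embedding-space image attacks this line of work builds on, here is a minimal NumPy sketch of L∞ PGD that pushes a toy image's embedding away from its paired text embedding. The linear "encoder" and all names here are hypothetical stand-ins, not the paper's actual models or loss.

```python
import numpy as np

# Toy stand-in for a VLP image encoder: a fixed linear projection.
# Everything here is illustrative, not the Co-Attack implementation.
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 16))  # maps a 16-dim "image" to an 8-dim embedding

def loss(x, text_emb):
    # Negative cosine similarity: larger loss = image and text embeddings pushed apart.
    v = W @ x
    return -float(v @ text_emb) / (np.linalg.norm(v) * np.linalg.norm(text_emb))

def grad_loss(x, text_emb):
    # Analytic gradient of the negative cosine similarity w.r.t. the image input.
    v = W @ x
    nv, nt = np.linalg.norm(v), np.linalg.norm(text_emb)
    g_v = -(text_emb / (nv * nt)) + (v @ text_emb) * v / (nv**3 * nt)
    return W.T @ g_v

def pgd_attack(x, text_emb, eps=0.1, alpha=0.02, steps=20):
    """L-inf PGD: sign-gradient ascent on the loss, projected back to the eps-ball."""
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(grad_loss(x_adv, text_emb))  # ascent step
        x_adv = x + np.clip(x_adv - x, -eps, eps)                    # project to ball
    return x_adv

x = rng.standard_normal(16)       # clean "image"
t = rng.standard_normal(8)        # matched "text" embedding
x_adv = pgd_attack(x, t)
```

A multimodal attack in the spirit of the paper would additionally perturb the text input (e.g., by word substitution) and coordinate the two perturbations rather than applying them independently.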

Jiaming Zhang, Qi Yi, Jitao Sang • 2022

Related benchmarks

| Task                    | Dataset                  | Metric    | Result | Rank |
|-------------------------|--------------------------|-----------|--------|------|
| Text-to-Video Retrieval | DiDeMo (test)            | R@1       | 4.48   | 399  |
| Text Retrieval          | Flickr30k (test)         | R@1 (ASR) | 97.08  | 340  |
| Text-to-Video Retrieval | MSR-VTT (test)           | R@1       | 17.69  | 255  |
| Image Retrieval         | Flickr30k (test)         | --        | --     | 210  |
| Text-to-Video Retrieval | MSR-VTT 1K (test)        | R@1       | 75.04  | 65   |
| Visual Reasoning        | NLVR2                    | --        | --     | 49   |
| Text Retrieval          | MSCOCO                   | ASR@R1    | 94.95  | 33   |
| Video Retrieval         | MSR-VTT                  | R@1       | 38.61  | 31   |
| Visual Entailment       | SNLI-VE                  | Accuracy  | 0.1866 | 24   |
| Text-to-Video Retrieval | DiDeMo 1K videos (test)  | R@1       | 5.42   | 21   |

Showing 10 of 14 rows.
