Towards Adversarial Attack on Vision-Language Pre-training Models

About

While vision-language pre-training (VLP) models have shown revolutionary improvements on various vision-language (V+L) tasks, their adversarial robustness remains largely unexplored. This paper studies adversarial attacks on popular VLP models and V+L tasks. First, we analyze the performance of adversarial attacks under different settings. By examining the influence of different perturbed objects and attack targets, we derive key observations that serve as guidance both for designing strong multimodal adversarial attacks and for constructing robust VLP models. Second, we propose a novel multimodal attack method on VLP models, Collaborative Multimodal Adversarial Attack (Co-Attack), which collectively carries out attacks on the image modality and the text modality. Experimental results demonstrate that the proposed method achieves improved attack performance on different V+L downstream tasks and VLP models. These observations and the novel attack method hopefully provide new insight into the adversarial robustness of VLP models, contributing to their safe and reliable deployment in more real-world scenarios. Code is available at https://github.com/adversarial-for-goodness/Co-Attack.
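To make the "collaborative" idea concrete, below is a minimal PGD-style sketch of one plausible instantiation: the text is perturbed first, and the image perturbation is then steered away from the adversarial text embedding so the two perturbations reinforce rather than cancel each other. This is an illustrative assumption, not the authors' exact algorithm; `co_attack_sketch`, `model.encode_image`, and `model.encode_text` are hypothetical names, and the linked repository contains the actual implementation.

```python
# Hedged sketch of a collaborative image-text attack (not the official code).
# Assumed interface: model.encode_image / model.encode_text return embeddings.
import torch
import torch.nn.functional as F

def co_attack_sketch(model, image, adv_text_ids,
                     eps=2/255, alpha=0.5/255, steps=10):
    """Perturb the image, within an L-inf ball of radius eps, so its
    embedding moves away from the embedding of the already-perturbed text."""
    with torch.no_grad():
        adv_text_emb = model.encode_text(adv_text_ids)  # fixed attack target

    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        img_emb = model.encode_image(image + delta)
        # Minimize image-text similarity so the image perturbation
        # reinforces, rather than cancels, the text perturbation.
        sim = F.cosine_similarity(img_emb, adv_text_emb, dim=-1).mean()
        sim.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()  # signed gradient descent step
            delta.clamp_(-eps, eps)             # project back into L-inf ball
            delta.grad.zero_()

    return (image + delta).clamp(0, 1).detach()
```

The key design choice illustrated here is ordering: fixing the adversarial text embedding before optimizing the image keeps the two modality-level perturbations aligned toward a single objective.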

Jiaming Zhang, Qi Yi, Jitao Sang • 2022

Related benchmarks

Task                Dataset    Result             Rank
Visual Reasoning    NLVR2      -                  49
Visual Entailment   SNLI-VE    Accuracy: 0.1866   24
REC                 RefCOCO+   ASR: 68.69         16
REC                 RefCOCOg   ASR: 65.5          16
