
On Evaluating Adversarial Robustness of Large Vision-Language Models

About

Large vision-language models (VLMs) such as GPT-4 have achieved unprecedented performance in response generation, especially with visual inputs, enabling more creative and adaptable interaction than large language models such as ChatGPT. Nonetheless, multimodal generation exacerbates safety concerns, since adversaries may successfully evade the entire system by subtly manipulating the most vulnerable modality (e.g., vision). To this end, we propose evaluating the robustness of open-source large VLMs in the most realistic and high-risk setting, where adversaries have only black-box system access and seek to deceive the model into returning the targeted responses. In particular, we first craft targeted adversarial examples against pretrained models such as CLIP and BLIP, and then transfer these adversarial examples to other VLMs such as MiniGPT-4, LLaVA, UniDiffuser, BLIP-2, and Img2Prompt. In addition, we observe that black-box queries on these VLMs can further improve the effectiveness of targeted evasion, resulting in a surprisingly high success rate for generating targeted responses. Our findings provide a quantitative understanding regarding the adversarial vulnerability of large VLMs and call for a more thorough examination of their potential security flaws before deployment in practice. Code is at https://github.com/yunqing-me/AttackVLM.
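The core transfer step described above — perturbing an image within a small budget so a surrogate encoder's embedding matches the embedding of a targeted response — can be sketched with projected gradient ascent on cosine similarity. The linear encoder `W`, the dimensions, and the analytic gradient below are toy stand-ins for illustration only; the actual attack differentiates through a pretrained surrogate such as CLIP's image encoder via autograd.

```python
import numpy as np

rng = np.random.default_rng(0)
D, K = 64, 16                                  # toy: 64-pixel "image", 16-dim embedding
W = rng.standard_normal((K, D)) / np.sqrt(D)   # toy linear surrogate encoder

def cos_and_grad(x, t):
    """Cosine similarity between W @ x and target embedding t,
    plus its gradient w.r.t. the input x (analytic, for the linear toy)."""
    e = W @ x
    ne, nt = np.linalg.norm(e), np.linalg.norm(t)
    c = (e @ t) / (ne * nt)
    dc_de = t / (ne * nt) - (e @ t) * e / (ne**3 * nt)
    return c, W.T @ dc_de

def pgd_targeted(x0, t, eps=8 / 255, alpha=1 / 255, steps=100):
    """Sign-gradient ascent on cosine similarity, projected into an
    L_inf ball of radius eps around x0 and into the valid pixel range."""
    x = x0.copy()
    for _ in range(steps):
        _, g = cos_and_grad(x, t)
        x = np.clip(x + alpha * np.sign(g), x0 - eps, x0 + eps)
        x = np.clip(x, 0.0, 1.0)
    return x

image = rng.uniform(0.2, 0.8, D)     # clean input, safely inside [0, 1]
target = rng.standard_normal(K)      # stands in for the target caption's embedding
adv = pgd_targeted(image, target)
c0, _ = cos_and_grad(image, target)
c1, _ = cos_and_grad(adv, target)    # similarity to the target should increase
```

Under the transfer assumption, `adv` is then fed to a black-box VLM (MiniGPT-4, LLaVA, etc.), and additional black-box queries can refine it further, as the abstract notes.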

Yunqing Zhao, Tianyu Pang, Chao Du, Xiao Yang, Chongxuan Li, Ngai-Man Cheung, Min Lin • 2023

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Object Hallucination Evaluation | POPE | – | 935 |
| Image Captioning | COCO | CIDEr 46.6 | 116 |
| Black-box Attack | VLM Evaluation Set (test) | Ensemble Success Rate 65.99 | 96 |
| Image Captioning | Flickr30K | CIDEr 30.2 | 55 |
| Adversarial Attack | LVLM Evaluation Set | ASR 7.6 | 40 |
| Visual Question Answering | TextVQA | VQA Accuracy 19.7 | 33 |
| Multi-task Adversarial Attack Evaluation | COCO, Flickr30k, TextVQA, VQAv2, POPE | Average SRR 59.8 | 33 |
| Image Captioning Robustness | Image Captioning Dataset | CLIP Score (RN-50) 78 | 30 |
| Adversarial Attack | ChestMNIST (test) | KMRa 0.05 | 15 |
| Black-Box LVLM Attack | PatternNet | KMRa 9 | 15 |

Showing 10 of 25 rows.

Other info

Code: https://github.com/yunqing-me/AttackVLM