
On the Robustness of Large Multimodal Models Against Image Adversarial Attacks

About

Recent advances in instruction tuning have led to the development of state-of-the-art Large Multimodal Models (LMMs). Given the novelty of these models, the impact of visual adversarial attacks on LMMs has not been thoroughly examined. We conduct a comprehensive study of the robustness of various LMMs against different adversarial attacks, evaluated across tasks including image classification, image captioning, and Visual Question Answering (VQA). We find that, in general, LMMs are not robust to visual adversarial inputs. However, our findings suggest that context provided to the model via prompts, such as the question in a QA pair, helps to mitigate the effects of visual adversarial inputs. Notably, the LMMs evaluated demonstrated remarkable resilience to such attacks on the ScienceQA task, with only an 8.10% drop in performance compared to a 99.73% drop for their visual counterparts. We also propose a new approach to real-world image classification, which we term query decomposition. By incorporating existence queries into our input prompt, we observe diminished attack effectiveness and improvements in image classification accuracy. This research highlights a previously under-explored facet of LMM robustness and sets the stage for future work aimed at strengthening the resilience of multimodal systems in adversarial environments.
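The query-decomposition idea described above can be sketched as follows. This is an illustrative reading, not the paper's exact implementation: the prompt wording and the helper names (`build_decomposed_prompts`, `classify_with_decomposition`, and the `ask_lmm` callback standing in for any LMM interface) are assumptions.

```python
def build_decomposed_prompts(candidate_labels):
    """Form one existence query per candidate class, plus a final
    classification query over all candidates (wording is hypothetical)."""
    existence_queries = [
        f"Is there a {label} in the image? Answer yes or no."
        for label in candidate_labels
    ]
    classification_query = (
        "Which of the following is shown in the image: "
        + ", ".join(candidate_labels) + "?"
    )
    return existence_queries, classification_query


def classify_with_decomposition(ask_lmm, image, candidate_labels):
    """ask_lmm(image, prompt) -> str is a stand-in for any LMM call.
    Answers to the existence queries are prepended as textual context
    before the final classification query, so the model sees its own
    yes/no judgments when choosing a label."""
    existence_queries, classification_query = build_decomposed_prompts(
        candidate_labels
    )
    context_lines = []
    for query in existence_queries:
        answer = ask_lmm(image, query)
        context_lines.append(f"Q: {query} A: {answer}")
    final_prompt = "\n".join(context_lines + [classification_query])
    return ask_lmm(image, final_prompt)
```

The design intuition, per the abstract, is that the extra textual context from the existence answers dilutes the influence of the adversarial visual input on the final classification.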

Xuanming Cui, Alejandro Aparcedo, Young Kyun Jang, Ser-Nam Lim • 2023
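For concreteness, one of the standard visual adversarial attacks studied in this line of work is the Fast Gradient Sign Method (FGSM). The paper evaluates several attacks; the minimal NumPy sketch below shows only the generic FGSM step, with a toy linear model supplying an analytic gradient (the model and epsilon value are illustrative assumptions, not the paper's setup).

```python
import numpy as np


def fgsm_perturb(image, grad, epsilon=8 / 255):
    """One FGSM step: move every pixel by epsilon in the sign of the
    loss gradient, then clip back to the valid [0, 1] range."""
    adv = image + epsilon * np.sign(grad)
    return np.clip(adv, 0.0, 1.0)


# Toy demo: for a linear "loss" w.x, the gradient w.r.t. the image is w.
rng = np.random.default_rng(0)
image = rng.uniform(0.0, 1.0, size=(3, 32, 32))
w = rng.normal(size=image.shape)
adv_image = fgsm_perturb(image, w)

# The perturbation stays inside the L-infinity epsilon ball.
assert np.abs(adv_image - image).max() <= 8 / 255 + 1e-9
```

Attacks like this produce inputs that are nearly imperceptible to humans (compare the SSIM of 0.8978 in the benchmark table) while sharply degrading model predictions.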

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Adversarial Attack | LVLM Evaluation Set | ASR | 97 | 40 |
| Black-box Adversarial Attack | ImageNet (test) | Success Rate (Res34) | 98.7 | 13 |
| Image Classification | CIFAR-10 (test) | CIFAR-10 Classification Score | 98.4 | 9 |
| Adversarial Attack | llava | CLIP Similarity (RN-50) | 0.2365 | 9 |
| Adversarial Attack | Qwen VL 2.5 | CLIP Similarity (RN-50) | 0.253 | 9 |
| Adversarial Attack | Gemini 2.0 | CLIP Similarity (RN-50) | 0.2564 | 9 |
| Adversarial Attack Imperceptibility | Adversarial Attack (Evaluation Set) | SSIM | 0.8978 | 9 |
| Image Classification | CIFAR-10 BLIP-2 | CLIP Similarity (RN-50) | 0.2335 | 9 |
| Image Classification | CIFAR-10 Kimi-VL | CLIP Similarity (RN-50) | 0.2383 | 9 |
| Image Classification | CIFAR-10 OpenFlamingo | CLIP Similarity (RN-50) | 0.1598 | 9 |

Showing 10 of 13 rows.
