
How Robust is Google's Bard to Adversarial Image Attacks?

About

Multimodal Large Language Models (MLLMs) that integrate text with other modalities (especially vision) have achieved unprecedented performance on various multimodal tasks. However, because the adversarial robustness of vision models remains an unsolved problem, introducing vision inputs can expose MLLMs to more severe safety and security risks. In this work, we study the adversarial robustness of Google's Bard, a competitive chatbot to ChatGPT that recently released its multimodal capability, to better understand the vulnerabilities of commercial MLLMs. By attacking white-box surrogate vision encoders or MLLMs, the generated adversarial examples can mislead Bard into outputting wrong image descriptions with a 22% success rate based solely on transferability. We show that these adversarial examples can also attack other MLLMs, e.g., with a 26% attack success rate against Bing Chat and an 86% attack success rate against ERNIE Bot. Moreover, we identify two defense mechanisms of Bard: face detection and toxicity detection of images. We design corresponding attacks to evade these defenses, demonstrating that Bard's current defenses are also vulnerable. We hope this work deepens our understanding of the robustness of MLLMs and facilitates future research on defenses. Our code is available at https://github.com/thu-ml/Attack-Bard. Update: GPT-4V became available in October 2023. We further evaluate its robustness under the same set of adversarial examples, achieving a 45% attack success rate.
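The core idea above is a transfer attack: perturb an image so that a white-box surrogate vision encoder's features are distorted, then hope the perturbation transfers to the black-box MLLM. Below is a minimal NumPy sketch of that objective using L-infinity PGD. It is purely illustrative and not the paper's implementation: the surrogate here is a toy random linear map standing in for a real image encoder (e.g., a CLIP vision tower), and `encode`, `pgd_feature_attack`, and all hyperparameters are hypothetical names chosen for this example.

```python
import numpy as np

# Toy stand-in for a white-box surrogate vision encoder: a fixed random
# linear map from 64 "pixels" to 32 "features". In the real attack this
# would be a pretrained image encoder.
rng = np.random.default_rng(0)
W = rng.standard_normal((32, 64))

def encode(x):
    """Surrogate feature extractor (illustrative)."""
    return W @ x

def pgd_feature_attack(x, eps=8 / 255, alpha=1 / 255, steps=10):
    """L_inf PGD that pushes the surrogate's features of x + delta away
    from the clean features -- the feature-distortion objective used for
    transfer attacks. Starts from a random point in the eps-ball so the
    gradient is nonzero at step 0."""
    f_clean = encode(x)
    delta = rng.uniform(-eps, eps, size=x.shape)
    for _ in range(steps):
        # Gradient of ||W(x + delta) - W x||^2 w.r.t. delta is 2 W^T W delta.
        grad = 2.0 * W.T @ (encode(x + delta) - f_clean)
        # Ascent step on the feature distance, projected back to the eps-ball.
        delta = np.clip(delta + alpha * np.sign(grad), -eps, eps)
    # Keep the adversarial "image" in the valid [0, 1] range.
    return np.clip(x + delta, 0.0, 1.0)

x = rng.random(64)  # a fake image in [0, 1]
x_adv = pgd_feature_attack(x)
feature_shift = np.linalg.norm(encode(x_adv) - encode(x))
```

With a differentiable deep encoder, the same loop applies unchanged except that `grad` comes from automatic differentiation; the adversarial image is then submitted to the black-box model, and success depends entirely on transferability.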

Yinpeng Dong, Huanran Chen, Jiawei Chen, Zhengwei Fang, Xiao Yang, Yichi Zhang, Yu Tian, Hang Su, Jun Zhu • 2023

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Adversarial Attack | LVLM Evaluation Set | ASR: 97.8 | 40 |
| Image Captioning Robustness | Image Captioning Dataset | CLIP Score (RN-50): 53.2 | 30 |
| Adversarial Attack | ChestMNIST (test) | KMRa: 0.04 | 15 |
| Black-Box LVLM Attack | PatternNet | KMRa: 8 | 15 |
| Adversarial Attack | GPT-4o | CLIP Similarity (RN-50): 0.2571 | 9 |
| Black-box Adversarial Attack | Claude thinking 4.0 | KMRa: 0.03 | 9 |
| Adversarial Attack | Qwen VL 2.5 | CLIP Similarity (RN-50): 0.2524 | 9 |
| Adversarial Attack | Gemini 2.0 | CLIP Similarity (RN-50): 0.2562 | 9 |
| Black-box Adversarial Attack | GPT-5 | KMRa: 8 | 9 |
| Image Classification | CIFAR-10 InternVL3 | CLIP Similarity (RN-50): 0.2535 | 9 |

Showing 10 of 21 rows.
