Failures to Find Transferable Image Jailbreaks Between Vision-Language Models

About

The integration of new modalities into frontier AI systems offers exciting capabilities, but also increases the possibility that such systems can be adversarially manipulated in undesirable ways. In this work, we focus on a popular class of vision-language models (VLMs) that generate text outputs conditioned on visual and textual inputs. We conducted a large-scale empirical study to assess the transferability of gradient-based universal image "jailbreaks" using a diverse set of over 40 open-parameter VLMs, including 18 new VLMs that we publicly release. Overall, we find that transferable gradient-based image jailbreaks are extremely difficult to obtain. When an image jailbreak is optimized against a single VLM or against an ensemble of VLMs, it successfully jailbreaks the attacked VLM(s) but exhibits little-to-no transfer to any other VLMs; transfer is not affected by whether the attacked and target VLMs share vision backbones or language models, whether the language model underwent instruction-following and/or safety-alignment training, or many other factors. Only two settings display partially successful transfer: between identically-pretrained and identically-initialized VLMs with slightly different VLM training data, and between different training checkpoints of a single VLM. Leveraging these results, we then demonstrate that transfer can be significantly improved against a specific target VLM by attacking larger ensembles of "highly similar" VLMs. These results stand in stark contrast to existing evidence of universal and transferable text jailbreaks against language models and transferable adversarial attacks against image classifiers, suggesting that VLMs may be more robust to gradient-based transfer attacks.
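As a rough illustration of the attack setup described above, the sketch below shows a gradient-based optimization loop for a universal image jailbreak. It is a minimal sketch, not the authors' released code: PyTorch is assumed, `jailbreak_loss` and `optimize_universal_image` are hypothetical names, the loss stands in for a differentiable forward pass through each attacked VLM, and the 336x336 resolution is an illustrative choice.

```python
# Minimal sketch (assumptions noted above): optimize one image so that every
# attacked VLM assigns low loss to harmful target completions for a set of
# harmful prompts, i.e. a gradient-based *universal* image jailbreak.
import torch


def jailbreak_loss(vlm, image, prompt, target):
    """Hypothetical helper: returns a differentiable loss (e.g. cross-entropy
    of the target completion given the image and prompt). A real version would
    call the VLM's forward pass with `image` patched into its visual inputs."""
    return vlm(image, prompt, target)


def optimize_universal_image(ensemble, prompts_and_targets, steps=500, lr=1e-2):
    # Start from a random image in [0, 1] and optimize the raw pixels directly.
    image = torch.rand(1, 3, 336, 336, requires_grad=True)
    optimizer = torch.optim.Adam([image], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        # Sum the loss over every attacked VLM and every (prompt, target) pair,
        # so a single image must work for the whole ensemble.
        loss = sum(
            jailbreak_loss(vlm, image, prompt, target)
            for vlm in ensemble
            for prompt, target in prompts_and_targets
        )
        loss.backward()
        optimizer.step()
        # Keep the optimized pixels in a valid image range.
        with torch.no_grad():
            image.clamp_(0.0, 1.0)
    return image.detach()
```

In the paper's setting, transfer is then measured by pairing the optimized image with harmful prompts on held-out VLMs that were not in the attacked ensemble; the central finding is that this transfer largely fails except between highly similar VLMs.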

Rylan Schaeffer, Dan Valentine, Luke Bailey, James Chua, Cristóbal Eyzaguirre, Zane Durante, Joe Benton, Brando Miranda, Henry Sleight, John Hughes, Rajashree Agrawal, Mrinank Sharma, Scott Emmons, Sanmi Koyejo, Ethan Perez • 2024

Related benchmarks

Task | Dataset | Result | Rank
Adversarial Attack | Q-Bench | Attack Success Rate: 43.66 | 37
Adversarial Attack | Mantis-Eval | Attack Success Rate: 54.68 | 37
Adversarial Attack | BLINK | Attack Success Rate (ASR): 61.77 | 37
Adversarial Attack | MVBench | ASR: 59.66 | 37
Adversarial Attack | NLVR2 | Attack Success Rate: 24.75 | 37
Visual Question Answering | MM-Vet | -- | 27
Visual Question Answering | Mantis-Eval | ASR: 44.45 | 12
Visual Question Answering | LLaVA-Bench | VQA ASR: 35.67 | 12
