
On the Adversarial Robustness of Multi-Modal Foundation Models

About

Multi-modal foundation models combining vision and language models, such as Flamingo or GPT-4, have recently gained enormous interest. Alignment of foundation models is used to prevent models from providing toxic or harmful output. While malicious users have successfully tried to jailbreak foundation models, an equally important question is whether honest users could be harmed by malicious third-party content. In this paper we show that imperceptible attacks on images, designed to change the caption output of a multi-modal foundation model, can be used by malicious content providers to harm honest users, e.g., by guiding them to malicious websites or broadcasting fake information. This indicates that countermeasures to adversarial attacks should be used by any deployed multi-modal foundation model.

Christian Schlarmann, Matthias Hein • 2023
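The attack described in the abstract can be instantiated, for example, as a targeted l∞ projected gradient descent (PGD) on the input image: a small pixel perturbation is optimized so that the captioning model assigns high likelihood to an attacker-chosen caption. The sketch below is a minimal illustration under assumed interfaces (a differentiable vision-language model exposing a Hugging Face style `model(pixel_values=..., labels=...)` call with a `.loss` field); it is not the paper's exact implementation.

```python
import torch

def targeted_caption_attack(model, image, target_ids, eps=4/255, alpha=1/255, steps=100):
    """Minimal targeted l_inf PGD sketch (hypothetical interface, not the paper's code).

    Assumptions: `model(pixel_values=..., labels=...)` returns an object with a
    `.loss` field (teacher-forced cross-entropy on the attacker-chosen caption);
    `image` is a float tensor in [0, 1] of shape (1, 3, H, W); `target_ids`
    holds the token ids of the target caption.
    """
    image = image.clone().detach()
    delta = torch.zeros_like(image, requires_grad=True)

    for _ in range(steps):
        adv = (image + delta).clamp(0, 1)
        loss = model(pixel_values=adv, labels=target_ids).loss
        loss.backward()
        with torch.no_grad():
            # Descend on the target-caption loss (targeted attack), then project
            # the perturbation back into the l_inf ball and the valid pixel range.
            delta -= alpha * delta.grad.sign()
            delta.clamp_(-eps, eps)
            delta.copy_((image + delta).clamp(0, 1) - image)
            delta.grad.zero_()

    return (image + delta).clamp(0, 1).detach()
```

With a small budget such as eps = 4/255 the perturbed image remains visually indistinguishable from the original, which is what makes such attacks dangerous for honest users consuming third-party content.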

Related benchmarks

Task | Dataset | Result | Rank
Adversarial Attack | LVLM Evaluation Set | ASR: 84.4 | 40
Image Classification | CIFAR-10 OpenFlamingo | CLIP Similarity (RN-50): 0.2272 | 9
Adversarial Attack | llava | CLIP Similarity (RN-50): 0.2376 | 9
Adversarial Attack Imperceptibility | Adversarial Attack (Evaluation Set) | SSIM: 0.9063 | 9
Image Classification | CIFAR-10 InternVL3 | CLIP Similarity (RN-50): 0.2552 | 9
Adversarial Attack | GPT-4o | CLIP Similarity (RN-50): 0.2561 | 9
Image Classification | CIFAR-10 (test) | CIFAR-10 Classification Score: 82.8 | 9
Adversarial Attack | Qwen VL 2.5 | CLIP Similarity (RN-50): 0.2523 | 9
Image Classification | CIFAR-10 Kimi-VL | CLIP Similarity (RN-50): 0.2382 | 9
Adversarial Attack | Gemini 2.0 | CLIP Similarity (RN-50): 0.2539 | 9

Showing 10 of 12 rows
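The "CLIP Similarity (RN-50)" entries above are reported by the benchmark; this page does not spell out the exact protocol. One common reading of such a score is the cosine similarity between CLIP RN50 text embeddings of the model's generated caption and a reference (or target) caption. The sketch below, using the open_clip package, illustrates that reading; the model tag and the caption-to-caption comparison are assumptions, not the benchmark's definition.

```python
import torch
import open_clip

# Assumption: "CLIP Similarity (RN-50)" is interpreted here as the cosine similarity
# between RN50 CLIP text embeddings of a generated caption and a reference caption.
model, *_ = open_clip.create_model_and_transforms("RN50", pretrained="openai")
tokenizer = open_clip.get_tokenizer("RN50")
model.eval()

def clip_text_similarity(generated: str, reference: str) -> float:
    with torch.no_grad():
        feats = model.encode_text(tokenizer([generated, reference]))
        feats = feats / feats.norm(dim=-1, keepdim=True)
        return (feats[0] @ feats[1]).item()

# Example: a higher score means the generated caption stays semantically
# closer to the reference caption.
print(clip_text_similarity("a dog playing in the park", "a puppy running on grass"))
```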
