
Adversarial Prompt Injection Attack on Multimodal Large Language Models

About

Although multimodal large language models (MLLMs) are increasingly deployed in real-world applications, their instruction-following behavior leaves them vulnerable to prompt injection attacks. Existing prompt injection methods predominantly rely on textual prompts or perceptible visual prompts that are observable by human users. In this work, we study imperceptible visual prompt injection against powerful closed-source MLLMs, where adversarial instructions are embedded in the visual modality. Our method adaptively embeds the malicious prompt into the input image via a bounded text overlay to provide semantic guidance. Meanwhile, an imperceptible visual perturbation is iteratively optimized to align the feature representations of the attacked image with those of the malicious visual and textual targets at both coarse- and fine-grained levels. Specifically, the visual target is instantiated as a text-rendered image and progressively refined during optimization to more faithfully represent the desired semantics and improve transferability. Extensive experiments on two multimodal understanding tasks across multiple closed-source MLLMs demonstrate that our approach outperforms existing methods.
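The core of the optimization described above is a bounded (imperceptible) perturbation that pushes the attacked image's features toward those of a target. The paper does not release code here, so the following is only a minimal toy sketch of that idea: a linear stand-in for a frozen image encoder (a real attack would use a surrogate MLLM's vision encoder), gradient ascent on cosine feature similarity, and an L-infinity clip that keeps the perturbation small. All names (`encode`, `feature_alignment_attack`) and values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Toy stand-in for a frozen vision encoder (assumption for illustration);
# a real attack would use a surrogate MLLM's image encoder instead.
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 16))  # maps a 16-dim "image" to 8-dim features

def encode(x):
    return W @ x

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def feature_alignment_attack(image, target_feat, eps=0.1, lr=0.02, steps=200):
    """Iteratively optimize an L-inf-bounded perturbation delta so that the
    features of image + delta align (in cosine similarity) with target_feat."""
    delta = np.zeros_like(image)
    for _ in range(steps):
        feat = encode(image + delta)
        fn, tn = np.linalg.norm(feat), np.linalg.norm(target_feat)
        # Gradient of cos(feat, target) w.r.t. feat (closed form).
        grad_feat = target_feat / (fn * tn) - (feat @ target_feat) * feat / (fn**3 * tn)
        delta += lr * W.T @ grad_feat       # ascend on similarity (linear encoder)
        delta = np.clip(delta, -eps, eps)   # keep the perturbation imperceptible
    return delta

image = rng.standard_normal(16)
# In the paper's setting the target features would come from e.g. a
# text-rendered image of the malicious prompt; here it is just random.
target_feat = encode(rng.standard_normal(16))
before = cosine(encode(image), target_feat)
delta = feature_alignment_attack(image, target_feat)
after = cosine(encode(image + delta), target_feat)
```

After the loop, `after` exceeds `before` while `delta` stays inside the epsilon ball, which is the essential trade-off (alignment vs. imperceptibility) the method optimizes; the paper additionally refines the visual target itself during optimization, which this sketch omits.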

Meiwen Ding, Song Xia, Chenqi Kong, Xudong Jiang • 2026

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| VQA | VQA, hard criterion | ASR 82 | 32 |
| Image Captioning | Target: GPT-4o, soft criterion | ASR 81 | 8 |
| Image Captioning | Target: GPT-5, soft criterion | ASR 56 | 8 |
| Image Captioning | Target: Gemini-2.5, soft criterion | ASR 79 | 8 |
| Image Captioning | Target: Claude-4.5, soft criterion | ASR 8 | 8 |
| Image Captioning | Target: GPT-4o, hard criterion | ASR 74 | 8 |
| Image Captioning | Target: GPT-o1, hard criterion | ASR 53 | 8 |
| Image Captioning | Target: Gemini-1.5, hard criterion | ASR 81 | 8 |
| Image Captioning | Target: Claude-3.5, hard criterion | ASR 7 | 8 |
| Adversarial Prompt Injection Attack | 1000 images, GPT-4o, hard criterion | ASR 64.7 | 2 |

ASR = Attack Success Rate. Showing 10 of 12 rows.
