
Multi-Turn Adaptive Prompting Attack on Large Vision-Language Models

About

Multi-turn jailbreak attacks are effective against text-only large language models (LLMs), gradually introducing malicious content across turns. When extended to large vision-language models (LVLMs), however, we find that naively adding visual inputs makes existing multi-turn jailbreaks easy to defend against: an overly malicious visual input readily triggers the defense mechanism of a safety-aligned LVLM, making its responses more conservative. To address this, we propose MAPA: a multi-turn adaptive prompting attack that 1) within each turn, alternates text- and vision-side attack actions to elicit the most malicious response; and 2) across turns, adjusts the attack trajectory through iterative back-and-forth refinement to gradually amplify response maliciousness. This two-level design enables MAPA to consistently outperform state-of-the-art methods, improving attack success rates by 11-35% on recent benchmarks against LLaVA-V1.6-Mistral-7B, Qwen2.5-VL-7B-Instruct, Llama-3.2-Vision-11B-Instruct, and GPT-4o-mini.
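The two-level loop described above can be sketched in outline. This is a minimal sketch inferred from the abstract alone: the callables `respond`, `score`, `mutate_text`, and `mutate_image`, and the alternation/refinement schedule, are illustrative assumptions, not the paper's actual prompts, judge, or refinement rule.

```python
# Hypothetical sketch of MAPA's two-level structure (per-turn alternation of
# text/vision actions, cross-turn trajectory refinement). All callables are
# placeholders; a real attack would query an LVLM and a maliciousness judge.
from typing import Callable, List, Tuple

def mapa_sketch(
    goal: str,
    respond: Callable[[str, str], str],   # (text_prompt, image_desc) -> response
    score: Callable[[str], float],        # maliciousness score of a response
    mutate_text: Callable[[str], str],    # perturb the text prompt
    mutate_image: Callable[[str], str],   # perturb the (described) visual input
    num_turns: int = 3,
    refinements: int = 2,
) -> List[Tuple[str, str, str]]:
    """Return the best (text, image, response) found at each turn."""
    trajectory: List[Tuple[str, str, str]] = []
    text, image = goal, "benign-looking image"  # start conservatively
    for _turn in range(num_turns):
        # Level 1 (within a turn): alternate text- and vision-side actions,
        # keeping whichever candidate elicits the more malicious response.
        best = (text, image, respond(text, image))
        for step in range(refinements):
            if step % 2 == 0:
                cand_text, cand_image = mutate_text(best[0]), best[1]
            else:
                cand_text, cand_image = best[0], mutate_image(best[1])
            resp = respond(cand_text, cand_image)
            if score(resp) > score(best[2]):
                best = (cand_text, cand_image, resp)
        trajectory.append(best)
        # Level 2 (across turns): carry the best turn forward as context,
        # escalating gradually (back-and-forth refinement of the trajectory).
        text, image = best[0] + " (continue)", best[1]
    return trajectory
```

With toy callables (e.g. `score = len`, string-appending mutators), the function returns one best candidate per turn, which illustrates the control flow without performing any real attack.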

In Chong Choi, Jiacheng Zhang, Feng Liu, Yiliao Song • 2026

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Jailbreak Attack | HarmBench | - | 376 |
| Jailbreak Attack | AdvBench | AASR 98.96 | 247 |
| Jailbreak Attack | JailbreakBench | ASR 93.33 | 54 |
| Jailbreak Attack | RedTeam 2K | ASR 94.79 | 16 |
| Jailbreak Attack | Jailbreak Evaluation (GPT-4o-mini) | ASR 93.33 | 13 |
