ArtPrompt: ASCII Art-based Jailbreak Attacks against Aligned LLMs

About

Safety is critical to the usage of large language models (LLMs). Multiple techniques, such as data filtering and supervised fine-tuning, have been developed to strengthen LLM safety. However, currently known techniques presume that the corpora used for safety alignment of LLMs are interpreted solely by their semantics. This assumption does not hold in real-world applications, which leads to severe vulnerabilities in LLMs. For example, users of forums often use ASCII art, a form of text-based art, to convey image information. In this paper, we propose a novel ASCII art-based jailbreak attack and introduce a comprehensive benchmark, Vision-in-Text Challenge (ViTC), to evaluate the capabilities of LLMs in recognizing prompts that cannot be interpreted solely by semantics. We show that five SOTA LLMs (GPT-3.5, GPT-4, Gemini, Claude, and Llama2) struggle to recognize prompts provided in the form of ASCII art. Based on this observation, we develop the jailbreak attack ArtPrompt, which leverages the poor performance of LLMs in recognizing ASCII art to bypass safety measures and elicit undesired behaviors. ArtPrompt requires only black-box access to the victim LLM, making it a practical attack. We evaluate ArtPrompt on five SOTA LLMs and show that it can effectively and efficiently induce undesired behaviors from all of them. Our code is available at https://github.com/uw-nsl/ArtPrompt.
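
To make the cloaking step concrete, here is a minimal sketch of how a keyword in a prompt can be replaced by its ASCII-art rendering, in the spirit of the attack described above. This is not the authors' implementation (the real pipeline is in the linked repository): the helper name cloak_word and the instruction wording are hypothetical, and the sketch assumes the third-party pyfiglet library for ASCII-art fonts. A benign word is used for demonstration.

```python
# Minimal sketch of the ArtPrompt-style cloaking idea; not the authors' code
# (see https://github.com/uw-nsl/ArtPrompt for the actual pipeline).
# Assumes the third-party `pyfiglet` library (pip install pyfiglet).
import pyfiglet


def cloak_word(prompt: str, word: str, font: str = "standard") -> str:
    """Replace `word` in `prompt` with [MASK] and prepend its ASCII-art form."""
    art = pyfiglet.figlet_format(word, font=font)  # render the word as ASCII art
    instruction = (
        "The ASCII art below spells a single word. "
        "Substitute that word for [MASK] in the request that follows.\n\n"
        f"{art}\n"
    )
    return instruction + prompt.replace(word, "[MASK]")


# Benign demonstration of the mechanism:
print(cloak_word("Write a short poem about rainbows.", "rainbows"))
```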

Fengqing Jiang, Zhangchen Xu, Luyao Niu, Zhen Xiang, Bhaskar Ramasubramanian, Bo Li, Radha Poovendran • 2024

Related benchmarks

Task              | Dataset                        | Metric                    | Result  | Rank
Jailbreak Attack  | HarmBench                      | Attack Success Rate (ASR) | 73      | 376
Jailbreak Attack  | AdvBench                       | ASR                       | 8.71e+3 | 247
Jailbreak Attack  | SafeBench                      | ASR                       | 0.00    | 112
Jailbreak Defense | JBB-Behaviors                  | ASR                       | 1       | 101
Jailbreak Attack  | JailbreakBench (JBB)           | ASR                       | 27.27   | 54
Jailbreaking      | AdvBench                       | ASR                       | 88      | 44
Jailbreak Attack  | HARMFULQA                      | JADES                     | 24      | 33
Safety Evaluation | AdvBench (50 examples)         | Safe Response Rate        | 96      | 32
Jailbreak         | AdvBench (Ensemble, GPT-4o)    | Attack Success Rate (ASR) | 0.00    | 25
Jailbreak         | AdvBench (Ensemble, Claude-v2) | Harmfulness Score (HS)    | 1.08    | 15

Showing 10 of 12 rows.
