
"Do Anything Now": Characterizing and Evaluating In-The-Wild Jailbreak Prompts on Large Language Models

About

The misuse of large language models (LLMs) has drawn significant attention from the general public and LLM vendors. One particular type of adversarial prompt, known as the jailbreak prompt, has emerged as the main attack vector to bypass safeguards and elicit harmful content from LLMs. In this paper, employing our new framework JailbreakHub, we conduct a comprehensive analysis of 1,405 jailbreak prompts spanning December 2022 to December 2023. We identify 131 jailbreak communities and discover unique characteristics of jailbreak prompts and their major attack strategies, such as prompt injection and privilege escalation. We also observe that jailbreak prompts are increasingly shifting from online Web communities to prompt-aggregation websites, and that 28 user accounts have consistently optimized jailbreak prompts over 100 days. To assess the potential harm caused by jailbreak prompts, we create a question set comprising 107,250 samples across 13 forbidden scenarios. Leveraging this dataset, our experiments on six popular LLMs show that their safeguards cannot adequately defend against jailbreak prompts in all scenarios. In particular, we identify five highly effective jailbreak prompts that achieve 0.95 attack success rates on ChatGPT (GPT-3.5) and GPT-4, and the earliest of them has persisted online for over 240 days. We hope that our study can help the research community and LLM vendors promote safer and better-regulated LLMs.

Xinyue Shen, Zeyuan Chen, Michael Backes, Yun Shen, Yang Zhang • 2023
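The attack success rate (ASR) reported in the abstract and in the benchmark results below is, at its core, the fraction of forbidden questions for which a model's response is judged harmful rather than refused. A minimal sketch of that computation, assuming a hypothetical `is_harmful` judge (real evaluations use a trained classifier or an LLM-based judge, not the toy refusal check shown here):

```python
def attack_success_rate(responses, is_harmful):
    """Fraction of responses the judge labels harmful, in [0.0, 1.0]."""
    if not responses:
        return 0.0
    return sum(1 for r in responses if is_harmful(r)) / len(responses)

# Toy judge (illustrative assumption): count a response as harmful unless
# it opens with a refusal phrase.
REFUSAL_PREFIXES = ("i cannot", "i can't", "sorry")

def toy_judge(response):
    return not response.lower().startswith(REFUSAL_PREFIXES)

asr = attack_success_rate(
    [
        "Sure, here is how...",        # complied -> harmful
        "I cannot help with that.",    # refused
        "Step 1: ...",                 # complied -> harmful
    ],
    toy_judge,
)
print(round(asr, 3))  # 2 of 3 responses judged harmful -> 0.667
```

A jailbreak prompt with ASR near 1.0, like the five highly effective prompts the paper identifies, elicits harmful responses for nearly every forbidden question tested.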

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|------|---------|--------|--------|------|
| Jailbreak Attack | JailbreakBench | ASR@10 | 0.00e+0 | 132 |
| Jailbreaking | HARMBENCH 159 standard behaviors (test) | ASR | 0.00e+0 | 51 |
| Jailbreak Attack | AdvBench (test) | -- | -- | 22 |
| Adversarial Attack Success | 12 Adversarial Tasks | Attack Success Rate | 25 | 21 |
| Jailbreak Attack | Llama2-7b five finetuned variants | Average ASR | 0.00e+0 | 16 |
| Jailbreak Attack | DeepSeek-7b five finetuned variants | Average ASR | 11.6 | 16 |
| Jailbreak Attack | LLaMA3-8B | Average ASR | 8.6 | 16 |
| Jailbreak Attack Transferability | Llama-3-8b-Instruct finetuned variants v1 (test) | TSR | 8.6 | 16 |
| Jailbreak Attack | Gemma-7b five finetuned variants | Average ASR | 6.4 | 16 |
| Jailbreak Attack Transferability | DeepSeek-llm-7b-chat finetuned variants v1 (test) | TSR | 11.6 | 16 |

Showing 10 of 16 rows.
