
Tree of Attacks: Jailbreaking Black-Box LLMs Automatically

About

While Large Language Models (LLMs) display versatile functionality, they continue to generate harmful, biased, and toxic content, as demonstrated by the prevalence of human-designed jailbreaks. In this work, we present Tree of Attacks with Pruning (TAP), an automated method for generating jailbreaks that requires only black-box access to the target LLM. TAP uses an attacker LLM to iteratively refine candidate (attack) prompts until one of them jailbreaks the target. Before sending prompts to the target, TAP assesses them and prunes those unlikely to result in jailbreaks, reducing the number of queries sent to the target LLM. In empirical evaluations, TAP generates prompts that jailbreak state-of-the-art LLMs (including GPT-4 Turbo and GPT-4o) for more than 80% of the prompts, significantly improving on previous state-of-the-art black-box jailbreak methods while issuing fewer queries. Furthermore, TAP can also jailbreak LLMs protected by state-of-the-art guardrails such as LlamaGuard.
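The loop described above (branch with an attacker model, prune off-topic candidates before querying the target, score responses, keep only the best branches) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the attacker, evaluator, and target are toy stand-ins for the LLMs used in the paper, and parameter names such as `branching_factor`, `max_depth`, and `max_width` are assumptions.

```python
def tree_of_attacks(goal, target, attacker, evaluator,
                    branching_factor=2, max_depth=3, max_width=4):
    """Hedged sketch of a TAP-style loop: refine prompts in a tree,
    pruning weak branches to limit queries to the target."""
    leaves = [goal]  # root of the tree: the initial attack prompt
    for _ in range(max_depth):
        # 1. Branch: the attacker proposes refinements of each leaf prompt.
        candidates = [attacker(p)
                      for p in leaves
                      for _ in range(branching_factor)]
        # 2. Prune (pre-query): drop prompts the evaluator deems unlikely
        #    to jailbreak, BEFORE any query reaches the target LLM.
        candidates = [p for p in candidates if evaluator.on_topic(p)]
        # 3. Query the target and score each response.
        scored = [(evaluator.score(p, target(p)), p) for p in candidates]
        for score, prompt in scored:
            if score >= 10:      # evaluator's "jailbroken" threshold
                return prompt    # success: return the jailbreaking prompt
        # 4. Prune (post-query): keep only the top-scoring branches.
        scored.sort(reverse=True, key=lambda sp: sp[0])
        leaves = [p for _, p in scored[:max_width]]
        if not leaves:
            break
    return None                  # no jailbreak found within the budget


# Toy usage with mock components (purely illustrative scoring):
class MockEvaluator:
    def on_topic(self, prompt):
        return True
    def score(self, prompt, response):
        return 10 if prompt.count("!") >= 3 else prompt.count("!")

found = tree_of_attacks("goal", target=lambda p: p,
                        attacker=lambda p: p + "!",
                        evaluator=MockEvaluator(), max_depth=5)
```

The two pruning phases are the point of the sketch: phase 1 filters candidates without spending target queries, and phase 2 bounds the tree's width so the total query count stays small.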

Anay Mehrotra, Manolis Zampetakis, Paul Kassianik, Blaine Nelson, Hyrum Anderson, Yaron Singer, Amin Karbasi • 2023

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Jailbreak Attack | HarmBench | Attack Success Rate (ASR) | 77 | 376 |
| Jailbreak Attack | AdvBench | AASR | 5.23e+3 | 247 |
| Jailbreak Attack | JailbreakBench | ASR@10 | 1 | 132 |
| Jailbreak | AdvBench | Avg. Queries | 12.3 | 63 |
| Jailbreak Attack | JailbreakBench (JBB) | -- | -- | 54 |
| Jailbreak | HarmBench Standard Behaviours (200 examples) | ASR | 5.5 | 48 |
| Jailbreak Attack | HARMFULQA | JADES | 37 | 33 |
| Jailbreak Attack | AdvBench AdvSub | QSR | 42 | 30 |
| Jailbreak Attack | MI (MaliciousInstructions) | QSR | 0.37 | 30 |
| Jailbreak Attack | AdvBench Claude-3.5-Sonnet | ASR | 28 | 7 |

Showing 10 of 18 rows.
