Jailbreaking Black Box Large Language Models in Twenty Queries

About

There is growing interest in ensuring that large language models (LLMs) align with human values. However, the alignment of such models is vulnerable to adversarial jailbreaks, which coax LLMs into overriding their safety guardrails. The identification of these vulnerabilities is therefore instrumental in understanding inherent weaknesses and preventing future misuse. To this end, we propose Prompt Automatic Iterative Refinement (PAIR), an algorithm that generates semantic jailbreaks with only black-box access to an LLM. PAIR -- which is inspired by social engineering attacks -- uses an attacker LLM to automatically generate jailbreaks for a separate targeted LLM without human intervention. In this way, the attacker LLM iteratively queries the target LLM to update and refine a candidate jailbreak. Empirically, PAIR often requires fewer than twenty queries to produce a jailbreak, which is orders of magnitude more efficient than existing algorithms. PAIR also achieves competitive jailbreaking success rates and transferability on open and closed-source LLMs, including GPT-3.5/4, Vicuna, and Gemini.
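
As a rough illustration of the loop the abstract describes, the following Python sketch shows how an attacker model might iteratively refine a candidate jailbreak against a black-box target. The three helpers (query_target_llm, query_attacker_llm, judge) are hypothetical stubs standing in for real model calls, and the scoring scale and stopping rule are assumptions, not the paper's exact implementation.

```python
from typing import List, Optional, Tuple

def query_target_llm(prompt: str) -> str:
    """Placeholder for a black-box API call to the target model."""
    return "I'm sorry, I can't help with that."

def query_attacker_llm(objective: str,
                       history: List[Tuple[str, str, float]]) -> str:
    """Placeholder for the attacker model proposing a refined prompt
    given the objective and all previous (prompt, response, score) turns."""
    return f"{objective} (refined after {len(history)} attempts)"

def judge(objective: str, prompt: str, response: str) -> float:
    """Placeholder judge scoring how jailbroken the response is, in [0, 1]."""
    return 0.0

def pair_attack(objective: str,
                max_queries: int = 20,
                threshold: float = 1.0) -> Optional[Tuple[str, str]]:
    """Iteratively refine a candidate jailbreak: query the target,
    score the response, and feed the feedback back to the attacker,
    stopping on success or when the query budget is exhausted."""
    history: List[Tuple[str, str, float]] = []
    prompt = objective  # initial candidate jailbreak
    for _ in range(max_queries):
        response = query_target_llm(prompt)         # black-box query
        score = judge(objective, prompt, response)  # rate the attempt
        if score >= threshold:
            return prompt, response                 # jailbreak found
        history.append((prompt, response, score))
        prompt = query_attacker_llm(objective, history)  # attacker refines
    return None                                     # budget exhausted
```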

Patrick Chao, Alexander Robey, Edgar Dobriban, Hamed Hassani, George J. Pappas, Eric Wong • 2023

Related benchmarks

Task             | Dataset                                   | Metric                    | Result | Rank
Jailbreak Attack | HarmBench                                 | Attack Success Rate (ASR) | 74.5   | 487
Jailbreak Attack | AdvBench                                  | ASR                       | 98.3   | 263
Jailbreak Attack | MaliciousInstruct                         | ASR                       | 91     | 161
Jailbreak Attack | JailbreakBench                            | ASR@10                    | 6      | 132
Jailbreak Attack | SafeBench                                 | ASR                       | 34     | 128
Jailbreaking     | AdvBench                                  | ASR                       | 90     | 114
Jailbreak Attack | JailbreakBench                            | ASR                       | 71     | 76
Jailbreak        | JBB-Behaviors utilitarian dilemmas (test) | Jailbreak Success Rate    | 76     | 72
Jailbreak        | AdvBench                                  | Avg Queries               | 20.3   | 63
Jailbreak Attack | JailbreakBench (JBB)                      | --                        | --     | 62

Showing 10 of 123 rows.
