Robust Prompt Optimization for Defending Language Models Against Jailbreaking Attacks

About

Despite advances in AI alignment, large language models (LLMs) remain vulnerable to adversarial attacks, or jailbreaking, in which adversaries modify prompts to induce unwanted behavior. While some defenses have been proposed, they have not been adapted to newly proposed attacks and more challenging threat models. To address this, we propose an optimization-based objective for defending LLMs against jailbreaking attacks and an algorithm, Robust Prompt Optimization (RPO), to create robust system-level defenses. Our approach directly incorporates the adversary into the defensive objective and optimizes a lightweight and transferable suffix, enabling RPO to adapt to worst-case adaptive attacks. Our theoretical and experimental results show improved robustness to both jailbreaks seen during optimization and unknown jailbreaks, reducing the attack success rate (ASR) on GPT-4 to 6% and on Llama-2 to 0% on JailbreakBench, setting the state of the art. Code can be found at https://github.com/lapisrocks/rpo

Andy Zhou, Bo Li, Haohan Wang • 2024
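The defensive objective described in the abstract is a minimax: an inner adversary picks its strongest attack, and an outer loop optimizes discrete suffix tokens to reduce that worst-case loss. The sketch below illustrates only this structure with a synthetic integer-token loss; the vocabulary, attack set, loss function, and `rpo_step` helper are all invented stand-ins, not the paper's actual implementation (which optimizes token suffixes against an LLM's unsafe-completion likelihood).

```python
import random

# Toy sketch of an RPO-style minimax loop. Assumption: the real method
# optimizes discrete defensive-suffix tokens against a jailbreaking adversary;
# here, a synthetic loss over integer "tokens" stands in for an LLM's
# unsafe-output likelihood, purely to show the objective's shape.

VOCAB = list(range(50))                  # toy token vocabulary
ATTACKS = [(3, 7), (11, 2), (40, 41)]    # hypothetical jailbreak prompts

def loss(attack, suffix):
    # Synthetic stand-in for the model's likelihood of complying with the attack.
    return sum((a - s) % 50 for a in attack for s in suffix)

def worst_case(suffix):
    # Inner maximization: the adversary selects its strongest known attack.
    return max(ATTACKS, key=lambda atk: loss(atk, suffix))

def rpo_step(suffix):
    # Outer minimization: greedy coordinate descent on one suffix position,
    # evaluated against the current worst-case attack.
    atk = worst_case(suffix)
    i = random.randrange(len(suffix))
    best = min(VOCAB, key=lambda t: loss(atk, suffix[:i] + [t] + suffix[i + 1:]))
    return suffix[:i] + [best] + suffix[i + 1:]

random.seed(0)
suffix = [0, 0, 0, 0]                    # the lightweight, transferable suffix
start = loss(worst_case(suffix), suffix)
for _ in range(20):
    suffix = rpo_step(suffix)
end = loss(worst_case(suffix), suffix)
print(f"worst-case loss: {start} -> {end}")
```

Because the adversary re-selects its best attack after every suffix update, the loop hedges against all attacks in the set rather than overfitting to one, which is the intuition behind the paper's robustness to unseen jailbreaks.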

Related benchmarks

Task                              | Dataset                   | Metric                    | Result | Rank
Deceptive Defense                 | EMRA (test)               | MTA (Average)             | 0.057  | 18
Jailbreak Defense Evaluation      | EMRA JQ                   | Attack Success Rate (ASR) | 2.3    | 18
Jailbreak Defense Evaluation      | EMRA HQ                   | ASR                       | 0.3    | 18
Jailbreak Defense Evaluation      | EMRA MTA                  | ASR                       | 7.8    | 18
Red-Teaming (Attack Success Rate) | JailbreakBench (test)     | ASR (Vicuna)              | 20     | 18
Jailbreak Defense Evaluation      | EMRA RQ                   | ASR                       | 16.8   | 18
Jailbreak Attack                  | JailbreakBench PAIR       | Attack Success Rate (ASR) | 20     | 15
Jailbreaking                      | HarmBench Transfer attack | Average Success Rate      | 29.6   | 14
Jailbreak Attack                  | JailbreakBench GCG        | ASR                       | 0.01   | 10
