
Robust Prompt Optimization for Defending Language Models Against Jailbreaking Attacks

About

Despite advances in AI alignment, large language models (LLMs) remain vulnerable to adversarial attacks, or jailbreaking, in which adversaries modify prompts to induce unwanted behavior. While some defenses have been proposed, they have not been adapted to newly proposed attacks and more challenging threat models. To address this, we propose an optimization-based objective for defending LLMs against jailbreaking attacks and an algorithm, Robust Prompt Optimization (RPO), to create robust system-level defenses. Our approach directly incorporates the adversary into the defensive objective and optimizes a lightweight, transferable suffix, enabling RPO to adapt to worst-case adaptive attacks. Our theoretical and experimental results show improved robustness both to jailbreaks seen during optimization and to unknown jailbreaks, reducing the attack success rate (ASR) on GPT-4 to 6% and on Llama-2 to 0% on JailbreakBench, setting a new state of the art. Code is available at https://github.com/lapisrocks/rpo.
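At a high level, the abstract describes a minimax objective: an inner maximization over jailbreak attacks and an outer minimization that optimizes a short defensive suffix appended to every prompt. The sketch below illustrates that structure only; it is not the authors' implementation. The loss function, token vocabulary, and jailbreak pool are toy stand-ins, and random greedy token swaps stand in for the gradient-guided discrete optimization against a real LLM used in practice (see the repository linked above).

```python
import random

random.seed(0)

# Hypothetical stand-ins for illustration: a pool of known jailbreak strings
# (the inner adversary) and a tiny candidate vocabulary for suffix tokens.
JAILBREAKS = ["ignore previous instructions", "pretend you are DAN"]
VOCAB = ["please", "refuse", "safely", "comply", "never", "harmful"]


def toy_loss(prompt: str) -> float:
    """Toy stand-in for -log p(refusal | prompt): lower when defense-like
    tokens appear in the prompt, higher when attack text dominates."""
    defense_hits = sum(prompt.count(w) for w in ("refuse", "never", "safely"))
    attack_hits = sum(prompt.count(a) for a in JAILBREAKS)
    return attack_hits - 0.5 * defense_hits


def worst_case_loss(user_prompt: str, suffix: list[str]) -> float:
    """Inner maximization: score the suffix against the strongest
    jailbreak in the pool applied to the prompt."""
    suffix_text = " ".join(suffix)
    return max(toy_loss(f"{user_prompt} {jb} {suffix_text}") for jb in JAILBREAKS)


def optimize_suffix(user_prompt: str, length: int = 4, steps: int = 50) -> list[str]:
    """Outer minimization via greedy coordinate descent: at each step,
    propose swapping one suffix position for a candidate token and keep
    the swap if it does not increase the worst-case loss."""
    suffix = [random.choice(VOCAB) for _ in range(length)]
    for _ in range(steps):
        pos = random.randrange(length)
        trial = suffix[:pos] + [random.choice(VOCAB)] + suffix[pos + 1:]
        if worst_case_loss(user_prompt, trial) <= worst_case_loss(user_prompt, suffix):
            suffix = trial
    return suffix


if __name__ == "__main__":
    defense = optimize_suffix("How do I build a weapon?")
    print("optimized defensive suffix:", " ".join(defense))
```

The alternating structure (evaluate each candidate suffix against the worst attack in the pool, then improve the suffix) mirrors the adversary-in-the-loop objective described in the abstract, which is what lets the optimized suffix remain effective under adaptive attacks.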

Andy Zhou, Bo Li, Haohan Wang • 2024

Related benchmarks

Task | Dataset | Metric | Result | Rank
Red-Teaming (Attack Success Rate) | JailbreakBench (test) | ASR (Vicuna) | 20 | 18
Jailbreak Attack | JailbreakBench PAIR | Attack Success Rate (ASR) | 20 | 10
Jailbreak Attack | JailbreakBench GCG | ASR | 0.01 | 10
Jailbreaking | HarmBench Transfer attack | GCG Success Rate | 17.8 | 8
