
AmpleGCG: Learning a Universal and Transferable Generative Model of Adversarial Suffixes for Jailbreaking Both Open and Closed LLMs

About

As large language models (LLMs) become increasingly prevalent and integrated into autonomous systems, ensuring their safety is imperative. Despite significant strides toward safety alignment, the recent work GCG (Zou et al., 2023) proposes a discrete token optimization algorithm and selects the single suffix with the lowest loss to successfully jailbreak aligned LLMs. In this work, we first discuss the drawbacks of solely picking the lowest-loss suffix during GCG optimization for jailbreaking and uncover the successful suffixes that are missed during the intermediate steps. Moreover, we utilize those successful suffixes as training data to learn a generative model, named AmpleGCG, which captures the distribution of adversarial suffixes given a harmful query and enables the rapid generation of hundreds of suffixes for any harmful query in seconds. AmpleGCG achieves a near 100% attack success rate (ASR) on two aligned LLMs (Llama-2-7B-chat and Vicuna-7B), surpassing the two strongest attack baselines. More interestingly, AmpleGCG also transfers seamlessly to attack different models, including closed-source LLMs, achieving a 99% ASR on the latest GPT-3.5. To summarize, our work amplifies the impact of GCG by training a generative model of adversarial suffixes that is universal to any harmful query and transferable from attacking open-source LLMs to closed-source LLMs. In addition, it can generate 200 adversarial suffixes for one harmful query in only 4 seconds, rendering it more challenging to defend against.
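The core data-collection idea in the abstract — keep every intermediate suffix that successfully jailbreaks the target rather than only the final lowest-loss one — can be sketched as follows. This is a minimal illustrative example, not the paper's implementation; the names `optimization_trace` and `is_jailbroken` are hypothetical stand-ins for the GCG step log and an attack-success judge.

```python
def collect_successful_suffixes(optimization_trace, is_jailbroken):
    """Contrast GCG's selection rule with AmpleGCG-style data collection.

    optimization_trace: list of (suffix, loss) pairs, one per GCG step.
    is_jailbroken: callable(suffix) -> bool, a stand-in for an attack judge.

    Returns (lowest_loss_suffix, all_successful_suffixes): GCG keeps only
    the former; the latter becomes training data for a generative model.
    """
    lowest_loss_suffix, _ = min(optimization_trace, key=lambda pair: pair[1])
    all_successful = [s for s, _ in optimization_trace if is_jailbroken(s)]
    return lowest_loss_suffix, all_successful

# Toy trace: the lowest-loss suffix need not coincide with the
# successful ones, so intermediate steps yield extra training data.
trace = [("suffix_a", 2.0), ("suffix_b", 0.5), ("suffix_c", 1.1)]
judge = lambda s: s in {"suffix_a", "suffix_c"}  # pretend judge
best, training_data = collect_successful_suffixes(trace, judge)
```

In this toy run, `best` is the lowest-loss `"suffix_b"`, while the collected training data is `["suffix_a", "suffix_c"]` — suffixes a loss-only selection would discard.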

Zeyi Liao, Huan Sun • 2024

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Jailbreak Attack | AdvBench 150 Harmful Behaviors | ASR: 28 | 45 |
| Jailbreak Attack | AdvBench | -- | 27 |
| Jailbreak Attack | HarmBench example-based Llama3 8B | ASR: 6 | 17 |
| Jailbreak Attack | HarmBench target: GLM4-9B | ASR: 6.5 | 11 |
| Jailbreak Attack | HarmBench target: Qwen2.5-7B | ASR: 8 | 11 |
| Jailbreak Attack | HarmBench target: Llama-3.1-8B | ASR: 4 | 11 |
| Jailbreak Attack | HarmBench target: Llama-3.2-3B | ASR: 3 | 11 |
| Jailbreak Attack | HarmBench target: Phi-4-Mini | ASR: 5 | 11 |
| Goal Hijacking | Target Response Dataset Llama-2 targets (test) | ASR (threatening): 0.00 | 9 |
