
Improved Generation of Adversarial Examples Against Safety-aligned LLMs

About

Adversarial prompts generated using gradient-based methods exhibit outstanding performance in performing automatic jailbreak attacks against safety-aligned LLMs. However, due to the discrete nature of text, the input gradient of an LLM struggles to precisely reflect the magnitude of the loss change caused by token replacements in the prompt, leading to limited attack success rates against safety-aligned LLMs, even in the white-box setting. In this paper, we explore a new perspective on this problem, suggesting that it can be alleviated by leveraging innovations inspired by transfer-based attacks originally proposed for attacking black-box image classification models. For the first time, we adapt the ideas behind effective transfer-based attacks, namely the Skip Gradient Method and Intermediate Level Attack, to gradient-based adversarial prompt generation, achieving significant performance gains without introducing obvious computational cost. Meanwhile, by discussing the mechanisms behind these gains, we draw new insights and develop appropriate combinations of these methods. Our empirical results show that 87% of the query-specific adversarial suffixes generated by the developed combination can induce Llama-2-7B-Chat to produce output that exactly matches the target string on AdvBench. This match rate is 33% higher than that of a very strong baseline, GCG, demonstrating advanced discrete optimization for adversarial prompt generation against LLMs. In addition, without introducing obvious cost, the combination achieves a >30% absolute increase in attack success rate compared with GCG when generating both query-specific (38% -> 68%) and universal adversarial prompts (26.68% -> 60.32%) for attacking the Llama-2-7B-Chat model on AdvBench. Code at: https://github.com/qizhangli/Gradient-based-Jailbreak-Attacks.
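The abstract's central observation is that gradient-based attacks such as GCG score discrete token swaps with a first-order approximation: the gradient of the loss with respect to the one-hot token indicators predicts how the loss would change if a token were replaced, and the most promising candidates are kept for evaluation. The sketch below illustrates only this candidate-scoring step in NumPy; it is not the authors' code, and the function name and the assumption that the gradient matrix has already been computed by backpropagation are illustrative.

```python
import numpy as np

def top_candidate_tokens(grad_onehot, current_ids, k=3):
    """Illustrative GCG-style candidate scoring (not the authors' code).

    grad_onehot: (seq_len, vocab) gradient of the attack loss w.r.t. the
        one-hot indicators of the adversarial suffix tokens.
    current_ids: (seq_len,) ids of the current suffix tokens.

    Returns, for each suffix position, the k token ids whose substitution
    is predicted (to first order) to decrease the loss the most.
    """
    seq_len, _ = grad_onehot.shape
    # Linearized loss change of swapping position i to token v:
    #   delta[i, v] = grad[i, v] - grad[i, current_ids[i]]
    delta = grad_onehot - grad_onehot[np.arange(seq_len), current_ids][:, None]
    # Most negative predicted change first.
    return np.argsort(delta, axis=1)[:, :k]
```

Because this linear approximation is imprecise for discrete token swaps, the predicted best candidates often disagree with the true loss change; the paper's contribution is to make the gradient signal used here more reliable by borrowing ideas from transfer-based image attacks.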

Qizhang Li, Yiwen Guo, Wangmeng Zuo, Hao Chen• 2024

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Jailbreak Attack | AdvBench | AASR 45.2 | 247 |
| Adversarial Attack | AdvBench (query-specific) | MR 52 | 20 |
| Jailbreak Attack | DeepSeek-7b five finetuned variants | Average ASR 11 | 16 |
| Jailbreak Attack Transferability | Llama-3-8b-Instruct finetuned variants v1 (test) | TSR 13 | 16 |
| Jailbreak Attack | Llama2-7b five finetuned variants | Average ASR 16 | 16 |
| Jailbreak Attack Transferability | Llama-2-7b-chat finetuned variants v1 (test) | TSR 16 | 16 |
| Jailbreak Attack | LLaMA3-8B | Average ASR 13 | 16 |
| Jailbreak Attack | Gemma-7b five finetuned variants | Average ASR 4.4 | 16 |
| Jailbreak Attack Transferability | DeepSeek-llm-7b-chat finetuned variants v1 (test) | TSR 11 | 16 |
| Jailbreak Attack Transferability | Gemma-7b-it finetuned variants v1 (test) | TSR 4.4 | 16 |

Showing 10 of 16 rows.
