
One Model Transfer to All: On Robust Jailbreak Prompts Generation against LLMs

About

Safety alignment in large language models (LLMs) is increasingly compromised by jailbreak attacks, which can manipulate these models to generate harmful or unintended content. Investigating these attacks is crucial for uncovering model vulnerabilities. However, many existing jailbreak strategies fail to keep pace with the rapid development of defense mechanisms, such as defensive suffixes, rendering them ineffective against defended models. To tackle this issue, we introduce a novel attack method called ArrAttack, specifically designed to target defended LLMs. ArrAttack automatically generates robust jailbreak prompts capable of bypassing various defense measures. This capability is supported by a universal robustness judgment model that, once trained, can perform robustness evaluation for any target model with a wide variety of defenses. By leveraging this model, we can rapidly develop a robust jailbreak prompt generator that efficiently converts malicious input prompts into effective attacks. Extensive evaluations reveal that ArrAttack significantly outperforms existing attack strategies, demonstrating strong transferability across both white-box and black-box models, including GPT-4 and Claude-3. Our work bridges the gap between jailbreak attacks and defenses, providing a fresh perspective on generating robust jailbreak prompts. We make the codebase available at https://github.com/LLBao/ArrAttack.
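The pipeline the abstract describes — a trained universal robustness judgment model that scores candidate prompts, guiding a generator that rewrites malicious inputs into robust jailbreak prompts — can be sketched roughly as follows. All names, the scoring heuristic, and the threshold here are illustrative placeholders, not the authors' actual implementation:

```python
from dataclasses import dataclass
from typing import Callable, List

# Hypothetical stand-ins: in the real system both components would be
# LLM-based models (a trained judge and a fine-tuned prompt generator).
Judge = Callable[[str], float]          # predicted robustness score in [0, 1]
Generator = Callable[[str], List[str]]  # proposes rewrites of a malicious prompt

@dataclass
class ArrAttackSketch:
    """Illustrative pipeline: generate candidate jailbreak prompts,
    then keep only those the robustness judge scores above a threshold."""
    judge: Judge
    generator: Generator
    threshold: float = 0.5

    def attack(self, malicious_prompt: str) -> List[str]:
        candidates = self.generator(malicious_prompt)
        # Rank candidates by predicted robustness against defenses,
        # most robust first, and drop low-scoring ones.
        scored = sorted(candidates, key=self.judge, reverse=True)
        return [c for c in scored if self.judge(c) >= self.threshold]

# Toy usage with trivial placeholder models (longer rewrite = "more robust"):
toy_judge: Judge = lambda p: min(len(p) / 40.0, 1.0)
toy_generator: Generator = lambda p: [p, f"Variant A: {p}", f"Longer variant B: {p}"]

pipeline = ArrAttackSketch(judge=toy_judge, generator=toy_generator)
robust = pipeline.attack("example request")
```

The key design point suggested by the abstract is that the judge is trained once and reused across target models and defenses, which is what makes the generator cheap to build for a new target.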

Linbao Li, Yannan Liu, Daojing He, Yu Li • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Jailbreak Attack | HarmBench | Attack Success Rate (ASR) | 69.5 | 487 |
| Jailbreaking | AdvBench (test) | ASR (GPT-4o) | 93 | 27 |
| Jailbreaking | HarmBench (test) | ASR (GPT-4o) | 91 | 27 |
| Jailbreaking | StrongReject (test) | ASR (GPT-4o) | 90 | 27 |
| Jailbreaking | JBB-Behaviors (test) | ASR (GPT-4o) | 94 | 27 |
| Jailbreak | AdvBench | ASR (GPT-4o) | 93.4 | 12 |
| Jailbreak | JBB-Behaviors | ASR (GPT-4o) | 94.6 | 12 |
| Jailbreak | StrongREJECT | ASR (GPT-4o) | 90.9 | 12 |
| Jailbreak attack success rate | AdvBench (LLaMA-2-7B-Chat) | ASR (SMO, GPT-4o) | 34 | 5 |
| Jailbreak attack success rate | AdvBench (Phi-3 Medium 14B Instruct) | ASR (SMO, GPT-4o) | 36 | 5 |

Showing 10 of 13 rows.
