
Jailbreak-Zero: A Path to Pareto Optimal Red Teaming for Large Language Models

About

This paper introduces Jailbreak-Zero, a red-teaming methodology that shifts Large Language Model (LLM) safety evaluation from a constrained, example-based approach to a broader and more effective policy-based framework. An attack LLM generates a high volume of diverse adversarial prompts, and this attack model is then fine-tuned on a preference dataset, allowing Jailbreak-Zero to achieve Pareto optimality across three key objectives: policy coverage, attack-strategy diversity, and prompt fidelity to real user inputs. Empirically, the method attains significantly higher attack success rates against both open-source and proprietary models, such as GPT-4o and Claude 3.5, than existing state-of-the-art techniques. Crucially, Jailbreak-Zero accomplishes this while producing human-readable, effective adversarial prompts with minimal human intervention, offering a more scalable and comprehensive way to identify and mitigate the safety vulnerabilities of LLMs.
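The core idea described above, scoring candidate adversarial prompts on several objectives, keeping the Pareto-optimal ones, and turning dominated/non-dominated pairs into preference data, can be sketched in a few lines. This is an illustrative toy sketch only: the scoring functions, prompt strings, and pair-construction heuristic are placeholder assumptions, not the paper's actual implementation.

```python
# Toy sketch: Pareto selection over (coverage, diversity, fidelity) score
# vectors, plus construction of chosen/rejected preference pairs.
# All prompts and scores below are illustrative placeholders.

def dominates(a, b):
    """True if score vector a Pareto-dominates b:
    at least as good on every objective, strictly better on one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(scored):
    """Keep (prompt, scores) entries whose score vector is non-dominated."""
    return [
        (p, s) for p, s in scored
        if not any(dominates(s2, s) for q, s2 in scored if q != p)
    ]

def preference_pairs(scored):
    """Pair each dominated prompt (rejected) with a Pareto-optimal
    prompt that dominates it (chosen), yielding preference data
    suitable for preference-based fine-tuning of the attack model."""
    front = {p for p, _ in pareto_front(scored)}
    pairs = []
    for p, s in scored:
        if p in front:
            continue
        for q, s2 in scored:
            if q in front and dominates(s2, s):
                pairs.append({"chosen": q, "rejected": p})
                break
    return pairs

# Hypothetical scores: (policy coverage, strategy diversity, fidelity).
scored = [
    ("prompt_a", (0.9, 0.8, 0.7)),
    ("prompt_b", (0.5, 0.5, 0.5)),   # dominated by prompt_a
    ("prompt_c", (0.2, 0.95, 0.9)),  # non-dominated trade-off
]

print([p for p, _ in pareto_front(scored)])  # ['prompt_a', 'prompt_c']
print(preference_pairs(scored))  # [{'chosen': 'prompt_a', 'rejected': 'prompt_b'}]
```

In a real pipeline the three scores would come from learned judges or heuristics, and the resulting pairs would feed a preference-optimization objective (e.g. DPO-style training) on the attack model; this sketch only shows the selection logic.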

Kai Hu, Abhinav Aggarwal, Mehran Khodabandeh, David Zhang, Eric Hsin, Li Chen, Ankit Jain, Matt Fredrikson, Akash Bharadwaj • 2025

Related benchmarks

| Task             | Dataset                                  | Result                           | Rank |
|------------------|------------------------------------------|----------------------------------|------|
| Jailbreak        | AdvBench (Ensemble configuration, GPT-4o) | Attack Success Rate (ASR): 99.5 | 25   |
| Jailbreak Attack | Claude 3.5                               | ASR: 96                          | 10   |
| Jailbreak Attack | HarmBench (example-based), Llama2 7B     | Attack Success Rate (ASR): 78    | 6    |
| Jailbreak Attack | HarmBench (example-based), Llama3 8B     | Attack Success Rate: 100         | 6    |
| Jailbreak Attack | HarmBench (example-based), Llama3 RR (8B) | Attack Success Rate: 83         | 6    |
