Jailbreak-Zero: A Path to Pareto Optimal Red Teaming for Large Language Models
About
This paper introduces Jailbreak-Zero, a red-teaming methodology that shifts Large Language Model (LLM) safety evaluation from a constrained, example-based approach to a broader and more effective policy-based framework. An attack LLM generates a high volume of diverse adversarial prompts and is then fine-tuned on a preference dataset, allowing Jailbreak-Zero to reach Pareto optimality across three objectives: policy coverage, attack-strategy diversity, and prompt fidelity to real user inputs. Empirically, the method achieves significantly higher attack success rates than existing state-of-the-art techniques against both open-source and proprietary models, including GPT-4o and Claude 3.5. Crucially, Jailbreak-Zero does so while producing human-readable, effective adversarial prompts with minimal human intervention, offering a more scalable and comprehensive way to identify and mitigate LLM safety vulnerabilities.
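To make the pipeline concrete, here is a minimal sketch of how the preference-data step might be organized, assuming hypothetical helpers (`ScoredPrompt`, the scoring fields, and `build_preference_pairs` are illustrative names, not taken from a released Jailbreak-Zero implementation): candidate prompts sampled from the attack model are scored on the three objectives, the Pareto front becomes the "chosen" set, and dominated candidates become "rejected", yielding preference pairs for fine-tuning.

```python
import random
from dataclasses import dataclass
from typing import List, Tuple

@dataclass(frozen=True)
class ScoredPrompt:
    """One candidate adversarial prompt with scores on the three objectives."""
    text: str
    coverage: float   # how well the prompt exercises the target safety policy
    diversity: float  # novelty of its attack strategy relative to the pool
    fidelity: float   # resemblance to organic, real-user inputs

def dominates(a: ScoredPrompt, b: ScoredPrompt) -> bool:
    """a Pareto-dominates b: no worse on every objective, strictly better on one."""
    objs = lambda p: (p.coverage, p.diversity, p.fidelity)
    return all(x >= y for x, y in zip(objs(a), objs(b))) and objs(a) != objs(b)

def pareto_front(pool: List[ScoredPrompt]) -> List[ScoredPrompt]:
    """Keep only candidates that no other candidate dominates."""
    return [p for p in pool if not any(dominates(q, p) for q in pool)]

def build_preference_pairs(pool: List[ScoredPrompt]) -> List[Tuple[str, str]]:
    """Pair each Pareto-optimal prompt (chosen) with a dominated one (rejected)."""
    front = pareto_front(pool)
    dominated = [p for p in pool if p not in front]
    rng = random.Random(0)
    return [(c.text, rng.choice(dominated).text) for c in front] if dominated else []

if __name__ == "__main__":
    # Stand-in for attack-LLM sampling plus automatic scoring, both hypothetical:
    # real scores would come from policy, diversity, and fidelity evaluators.
    rng = random.Random(0)
    pool = [ScoredPrompt(f"candidate-{i}", rng.random(), rng.random(), rng.random())
            for i in range(8)]
    pairs = build_preference_pairs(pool)
    # These (chosen, rejected) pairs would feed a preference-tuning step
    # (e.g., DPO) on the attack model.
    print(pairs[:2])
```

The key design point this sketch illustrates is that "chosen" versus "rejected" is decided by Pareto dominance over all three objectives jointly, so fine-tuning never trades one objective away to maximize another.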
Related benchmarks
| Task | Benchmark / Target | Metric | Result | Rank |
|---|---|---|---|---|
| Jailbreak | AdvBench (ensemble configuration), GPT-4o | Attack Success Rate (ASR) | 99.5 | 25 |
| Jailbreak Attack | Claude 3.5 | ASR | 96 | 10 |
| Jailbreak Attack | HarmBench (example-based), Llama2 7B | ASR | 78 | 6 |
| Jailbreak Attack | HarmBench (example-based), Llama3 8B | ASR | 100 | 6 |
| Jailbreak Attack | HarmBench (example-based), Llama3 RR (8B) | ASR | 83 | 6 |