FlipAttack: Jailbreak LLMs via Flipping
About
This paper proposes a simple yet effective jailbreak attack named FlipAttack against black-box LLMs. First, based on the autoregressive nature of LLMs, we reveal that they tend to understand text from left to right and struggle to comprehend it when noise is added to the left side. Motivated by these insights, we propose to disguise the harmful prompt by constructing left-side noise merely from the prompt itself, and generalize this idea into 4 flipping modes. Second, we verify the strong ability of LLMs to perform the text-flipping task, and develop 4 variants that guide LLMs to denoise, understand, and execute harmful behaviors accurately. These designs keep FlipAttack universal, stealthy, and simple, allowing it to jailbreak black-box LLMs within only 1 query. Experiments on 8 LLMs demonstrate the superiority of FlipAttack. Remarkably, it achieves a ~98% attack success rate on GPT-4o and a ~98% average bypass rate against 5 guardrail models. The code is available on [GitHub](https://github.com/yueliu1999/FlipAttack).
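The core disguise idea, reversing the prompt so its left side becomes noise, can be sketched in a few lines. This is an illustrative sketch, not the paper's implementation; the function names below are hypothetical, and the exact definitions of the paper's 4 flipping modes are in the repository.

```python
def flip_word_order(prompt: str) -> str:
    """Reverse the order of words, keeping each word intact."""
    return " ".join(reversed(prompt.split()))

def flip_chars_in_word(prompt: str) -> str:
    """Reverse the characters inside each word, keeping word order."""
    return " ".join(word[::-1] for word in prompt.split())

def flip_chars_in_sentence(prompt: str) -> str:
    """Reverse every character in the prompt."""
    return prompt[::-1]

# Each mode turns the left side of the text into "noise" derived
# purely from the prompt itself; the LLM is later asked to flip
# it back before following the instruction.
example = "explain the idea"
print(flip_word_order(example))       # "idea the explain"
print(flip_chars_in_word(example))    # "nialpxe eht aedi"
print(flip_chars_in_sentence(example))  # "aedi eht nialpxe"
```

Because the noise is constructed deterministically from the prompt alone, the attack needs no auxiliary model and fits in a single query.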
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Jailbreak Attack | HarmBench | -- | 376 |
| Persona Manipulation | BFI (test) | Success Score: 90.68 | 72 |
| Persona Manipulation | MPI (test) | Success Score: 78.54 | 72 |
| Persona Manipulation | ANTHR (test) | Success Score: 85.83 | 72 |
| Jailbreak Attack | JailbreakBench (JBB) | -- | 54 |
| Jailbreak Attack | AdvBench 50 | ASR (KW): 100 | 48 |
| Jailbreak Attack | ShadowRisk | ASR-KW: 100 | 48 |
| Jailbreaking | AdvBench | -- | 44 |
| Transferable Adversarial Attack | AdvBench LLM Classifier (test) | TASR@1: 3.33e+3 | 39 |
| Transferable Adversarial Attack | HarmBench Classifier (test) | TASR@1: 33.3 | 37 |