Towards Understanding Jailbreak Attacks in LLMs: A Representation Space Analysis
About
Large language models (LLMs) are susceptible to a type of attack known as jailbreaking, which misleads LLMs into outputting harmful content. Although there are diverse jailbreak attack strategies, there is no unified understanding of why some methods succeed while others fail. This paper explores the behavior of harmful and harmless prompts in the LLM's representation space to investigate the intrinsic properties of successful jailbreak attacks. We hypothesize that successful attacks share a similar property: they are effective in moving the representation of the harmful prompt along the direction toward the harmless prompts. We incorporate hidden representations into the objectives of existing jailbreak attacks to move the attacks along this acceptance direction, and we conduct experiments to validate the above hypothesis using the proposed objective. We hope this study provides new insights into understanding how LLMs process harmfulness information.
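The core idea above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes prompt representations are available as vectors (e.g., a hidden state from some layer), estimates the "acceptance direction" as the difference between the mean harmless and mean harmful representations, and scores a prompt by its projection onto that direction. The function names and the use of mean-difference are illustrative assumptions.

```python
import numpy as np

def acceptance_direction(harmless_reps: np.ndarray, harmful_reps: np.ndarray) -> np.ndarray:
    """Unit vector pointing from the harmful cluster toward the harmless cluster.

    harmless_reps, harmful_reps: arrays of shape (n_prompts, hidden_dim),
    e.g., hidden states collected from some layer of the LLM (assumed given).
    """
    d = harmless_reps.mean(axis=0) - harmful_reps.mean(axis=0)
    return d / np.linalg.norm(d)

def acceptance_score(rep: np.ndarray, direction: np.ndarray) -> float:
    """Projection of one prompt's representation onto the acceptance direction.

    A jailbreak objective could reward suffixes that increase this score,
    i.e., that move the harmful prompt toward the harmless region.
    """
    return float(rep @ direction)

# Toy example with 2-D "representations" standing in for real hidden states.
harmless = np.array([[1.0, 0.2], [1.2, 0.0], [0.9, 0.1]])
harmful = np.array([[-1.0, -0.1], [-1.1, 0.0], [-0.9, -0.2]])

direction = acceptance_direction(harmless, harmful)
attacked = harmful[0] + 0.5 * direction  # a perturbed (attacked) prompt representation

# Moving along the acceptance direction strictly increases the score.
print(acceptance_score(harmful[0], direction) < acceptance_score(attacked, direction))
```

In the paper's setting, this score would be added as a term to an existing attack objective (e.g., alongside a target-string loss), so the optimized suffix pushes the prompt's hidden representation toward the harmless cluster.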
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Jailbreak Attack Transferability | Gemma-7b-it finetuned variants v1 (test) | TSR 41.2 | 16 |
| Jailbreak Attack | Gemma-7b five finetuned variants | Average ASR 41.6 | 16 |
| Jailbreak Attack Transferability | Llama-2-7b-chat finetuned variants v1 (test) | Transfer Success Rate (TSR) 24.8 | 16 |
| Jailbreak Attack Transferability | Llama-3-8b-Instruct finetuned variants v1 (test) | TSR 16.4 | 16 |
| Jailbreak Attack Transferability | DeepSeek-llm-7b-chat finetuned variants v1 (test) | TSR 58 | 16 |
| Jailbreak Attack | Llama2-7b five finetuned variants | Average ASR 24.8 | 16 |
| Jailbreak Attack | LLaMA3-8B | Average ASR 16.4 | 16 |
| Jailbreak Attack | DeepSeek-7b five finetuned variants | Average ASR 59.2 | 16 |
| Jailbreak Attack | deepseek-7b v1 (pretrained) | ASR (%) 96 | 13 |
| Jailbreak Attack | llama2-7b v1 (pretrained) | ASR 0.64 | 13 |