
Towards Understanding Jailbreak Attacks in LLMs: A Representation Space Analysis

About

Large language models (LLMs) are susceptible to jailbreak attacks, which mislead them into outputting harmful content. Although diverse jailbreak strategies exist, there is no unified understanding of why some methods succeed and others fail. This paper examines how harmful and harmless prompts behave in the LLM's representation space to investigate the intrinsic properties of successful jailbreak attacks. We hypothesize that successful attacks share a common property: they effectively move the representation of a harmful prompt in the direction of harmless prompts. To validate this hypothesis, we incorporate hidden representations into the objective of existing jailbreak attacks, steering the attacks along this acceptance direction, and conduct experiments with the proposed objective. We hope this study provides new insights into how LLMs process harmfulness information.
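The core idea described above can be sketched with a small example. The code below is a minimal illustration, not the paper's implementation: it assumes hidden states have already been extracted from some LLM layer (here stand-in vectors), estimates an "acceptance direction" as the difference between the mean representations of harmless and harmful prompts, and scores how far a prompt's representation lies along that direction. A jailbreak objective could then add such a score as a regularization term.

```python
import numpy as np

def acceptance_direction(harmless_reps: np.ndarray, harmful_reps: np.ndarray) -> np.ndarray:
    """Unit vector pointing from the harmful-prompt cluster toward the
    harmless-prompt cluster in representation space.

    Both inputs are (n, d) arrays of hidden states, e.g. last-token
    activations from an intermediate LLM layer.
    """
    direction = harmless_reps.mean(axis=0) - harmful_reps.mean(axis=0)
    return direction / np.linalg.norm(direction)

def acceptance_score(rep: np.ndarray, direction: np.ndarray) -> float:
    """Cosine similarity between a single prompt representation and the
    acceptance direction; higher means closer to harmless prompts."""
    return float(rep @ direction / (np.linalg.norm(rep) * np.linalg.norm(direction)))

# Toy stand-ins for extracted hidden states (assumption: real usage
# would pull these from a model's forward pass).
rng = np.random.default_rng(0)
harmless = rng.normal(loc=1.0, scale=0.1, size=(8, 4))
harmful = rng.normal(loc=-1.0, scale=0.1, size=(8, 4))

d = acceptance_direction(harmless, harmful)
# A successful jailbreak, under the paper's hypothesis, would raise the
# harmful prompt's score toward that of harmless prompts.
print(acceptance_score(harmless[0], d), acceptance_score(harmful[0], d))
```

In this sketch, an attack objective such as GCG's loss could be augmented with a term like `-lambda * acceptance_score(rep, d)` to push the adversarial prompt's representation along the acceptance direction; the weighting and layer choice here are assumptions for illustration.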

Yuping Lin, Pengfei He, Han Xu, Yue Xing, Makoto Yamada, Hui Liu, Jiliang Tang · 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Jailbreak Attack Transferability | Gemma-7b-it finetuned variants v1 (test) | TSR | 41.2 | 16 |
| Jailbreak Attack | Gemma-7b five finetuned variants | Average ASR | 41.6 | 16 |
| Jailbreak Attack Transferability | Llama-2-7b-chat finetuned variants v1 (test) | Transfer Success Rate (TSR) | 24.8 | 16 |
| Jailbreak Attack Transferability | Llama-3-8b-Instruct finetuned variants v1 (test) | TSR | 16.4 | 16 |
| Jailbreak Attack Transferability | DeepSeek-llm-7b-chat finetuned variants v1 (test) | TSR | 58 | 16 |
| Jailbreak Attack | Llama2-7b five finetuned variants | Average ASR | 24.8 | 16 |
| Jailbreak Attack | LLaMA3-8B | Average ASR | 16.4 | 16 |
| Jailbreak Attack | DeepSeek-7b five finetuned variants | Average ASR | 59.2 | 16 |
| Jailbreak Attack | deepseek-7b v1 (pretrained) | ASR (%) | 96 | 13 |
| Jailbreak Attack | llama2-7b v1 (pretrained) | ASR | 0.64 | 13 |

(Showing 10 of 12 rows.)
