Chain of Attack: a Semantic-Driven Contextual Multi-Turn attacker for LLM

About

Large language models (LLMs) have achieved remarkable performance across natural language processing tasks, especially in dialogue systems. However, LLMs can also pose security and ethical risks, particularly in multi-turn conversations, where models are more easily steered by contextual content into producing harmful or biased responses. In this paper, we present a novel method for attacking LLMs in multi-turn dialogues, called CoA (Chain of Attack). CoA is a semantic-driven contextual multi-turn attack that adaptively adjusts its attack policy based on contextual feedback and semantic relevance over the course of a multi-turn dialogue with the target model, leading the model to produce unreasonable or harmful content. We evaluate CoA on multiple LLMs and datasets, and show that it effectively exposes vulnerabilities of LLMs and outperforms existing attack methods. Our work provides a new perspective and tool for attacking and defending LLMs, and contributes to the security and ethical assessment of dialogue systems.
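The abstract describes an attack loop that adapts across turns using semantic-relevance feedback. A minimal sketch of such a loop is shown below; the function names, the word-overlap relevance score, and the accept/backtrack policy are illustrative assumptions, not the paper's actual implementation (which uses semantic similarity models and LLM-generated attack prompts).

```python
def semantic_relevance(text, goal):
    """Crude stand-in for an embedding-based similarity score:
    fraction of goal words that appear in the text (hypothetical)."""
    goal_words = set(goal.lower().split())
    return len(goal_words & set(text.lower().split())) / max(len(goal_words), 1)

def multi_turn_attack(target_model, goal, candidate_prompts,
                      max_turns=5, success_threshold=0.9):
    """Sketch of a semantic-driven multi-turn attack loop.

    Each turn sends a candidate prompt given the accepted dialogue
    history; a turn is kept only if it moves the conversation closer
    to the attack goal (relevance rises), otherwise it is discarded
    and the loop continues with the next candidate (backtracking).
    """
    history = []          # accepted (prompt, response) pairs
    best_score = 0.0
    for turn in range(max_turns):
        prompt = candidate_prompts[turn % len(candidate_prompts)]
        response = target_model(history, prompt)
        score = semantic_relevance(response, goal)
        if score >= success_threshold:
            history.append((prompt, response))
            return history, score, True   # attack judged successful
        if score > best_score:            # progress: accept this turn
            best_score = score
            history.append((prompt, response))
        # otherwise drop the turn and try a different prompt
    return history, best_score, False
```

In the real method, `target_model` would be the LLM under attack, the relevance score would come from a semantic similarity model, and candidate prompts would be generated adaptively rather than drawn from a fixed list.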

Xikang Yang, Xuehai Tang, Songlin Hu, Jizhong Han • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Jailbreak Attack | HarmBench | Attack Success Rate (ASR) | 86.1 | 376 |
| Jailbreak Attack | AdvBench | ASR | 96.5 | 247 |
| Jailbreak | AdvBench | Avg Queries | 14.3 | 63 |
| Jailbreak Attack | JailbreakBench | ASR | 78.33 | 54 |
| Jailbreaking | AdvBench | -- | -- | 44 |
| Transferable Adversarial Attack | AdvBench LLM Classifier (test) | TASR@1 | 6.32e+3 | 39 |
| Transferable Adversarial Attack | HarmBench Classifier (test) | TASR@1 | 68.4 | 37 |
| Jailbreak Attack | AdvBench GPT-3.5-turbo 1.0 (test) | Attack Success Rate | 70.9 | 22 |
| Jailbreak Attack | RedTeam 2K | ASR | 68.33 | 16 |
| Jailbreak Attack | Jailbreak Evaluation GPT-4o-mini | ASR | 73.33 | 13 |

Showing 10 of 19 rows
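Most rows above report Attack Success Rate (ASR), i.e. the percentage of attack attempts for which the target model produced the harmful behavior, as judged by an evaluator. A minimal computation of this metric (the judging step itself is outside this snippet):

```python
def attack_success_rate(outcomes):
    """ASR in percent: fraction of attack attempts judged successful.

    `outcomes` is a sequence of booleans, one per attacked behavior,
    True when the judge deems the model's response a successful attack.
    """
    if not outcomes:
        return 0.0
    return 100.0 * sum(outcomes) / len(outcomes)
```

For example, 90 successful attacks out of 120 behaviors gives an ASR of 75.0.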
