
Pattern Enhanced Multi-Turn Jailbreaking: Exploiting Structural Vulnerabilities in Large Language Models

About

Large language models (LLMs) remain vulnerable to multi-turn jailbreaking attacks that exploit conversational context to gradually bypass safety constraints. These attacks target different harm categories through distinct conversational approaches, yet existing multi-turn methods often rely on heuristic or ad hoc exploration strategies, providing limited insight into the underlying model weaknesses. The relationship between conversation patterns and model vulnerabilities across harm categories remains poorly understood. We propose Pattern Enhanced Chain of Attack (PE-CoA), a framework of five conversation patterns for constructing multi-turn jailbreaks through natural dialogue. Evaluating PE-CoA on twelve LLMs spanning ten harm categories, we achieve state-of-the-art performance and uncover pattern-specific vulnerabilities and behavioral characteristics of LLMs: models exhibit distinct weakness profiles, robustness to one pattern does not generalize to others, and model families share similar failure modes. These findings highlight limitations of current safety training and indicate the need for pattern-aware defenses. Code is available at: https://github.com/Ragib-Amin-Nihal/PE-CoA
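The abstract describes driving a model through a fixed sequence of pattern-scripted turns and checking each reply. A minimal, content-free sketch of that control flow is below; the paper's five patterns, judge, and model interface are not specified here, so every name in this snippet (`ConversationPattern`, `run_multi_turn`, `query_model`, `judge`) is an illustrative placeholder, not the authors' API.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ConversationPattern:
    """Hypothetical container: one prompt template per conversation turn."""
    name: str
    turn_templates: List[str]

def run_multi_turn(pattern, goal, query_model, judge):
    """Drive a multi-turn dialogue following a pattern.

    query_model(history, prompt) -> reply string; judge(reply) -> bool.
    Stops early when the judge flags a turn as successful.
    """
    history = []
    for template in pattern.turn_templates:
        prompt = template.format(goal=goal)   # instantiate the turn for this goal
        reply = query_model(history, prompt)  # model sees prior turns as context
        history.append((prompt, reply))
        if judge(reply):
            return True, history
    return False, history
```

The judge-per-turn loop is the structural point: success is decided on the accumulated conversation, not on any single prompt.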

Ragib Amin Nihal, Rui Wen, Kazuhiro Nakadai, Jun Sakuma • 2025

Related benchmarks

Task          Dataset        ASR (%)   Rank
Jailbreaking  DeepSeek V3.2  78.5      9
Jailbreaking  GPT 5.1        86.5      9
Jailbreaking  Gemini Pro 3   77.0      9
Jailbreaking  Claude 4.5     83.0      9
Jailbreaking  GPT-4o         85.5      9
Jailbreaking  Llama 4        83.5      9
Jailbreaking  Qwen-Max       87.5      9
Jailbreaking  Llama 3.1      83.5      9
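The table's metric, Attack Success Rate (ASR), is conventionally the fraction of attempted goals for which the attack elicits a judged-successful response, reported as a percentage. A minimal sketch, assuming per-goal boolean outcomes (the paper's exact judging procedure is not described on this page):

```python
def attack_success_rate(outcomes):
    """ASR in percent, given one boolean per attempted goal."""
    if not outcomes:
        return 0.0
    return 100.0 * sum(outcomes) / len(outcomes)

# e.g. 167 successes out of 200 attempts
print(attack_success_rate([True] * 167 + [False] * 33))  # → 83.5
```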
