
CoopGuard: Stateful Cooperative Agents Safeguarding LLMs Against Evolving Multi-Round Attacks

About

As Large Language Models (LLMs) are increasingly deployed in complex applications, their vulnerability to adversarial attacks raises urgent safety concerns, especially attacks that evolve over multi-round interactions. Existing defenses are largely reactive and struggle to adapt as adversaries refine their strategies across rounds. In this work, we propose CoopGuard, a stateful multi-round LLM defense framework based on cooperative agents that maintains and updates an internal defense state to counter evolving attacks. It employs three specialized agents (a Deferring Agent, a Tempting Agent, and a Forensic Agent) with complementary round-level strategies, coordinated by a System Agent that conditions decisions on the evolving defense state (the interaction history) and orchestrates the agents over time. To evaluate evolving threats, we introduce the EMRA benchmark with 5,200 adversarial samples across 8 attack types, simulating progressively evolving multi-round LLM attacks. Experiments show that CoopGuard reduces the attack success rate by 78.9% relative to state-of-the-art defenses, improves the deceptive rate by 186%, and reduces attack efficiency by 167.9%, offering a more comprehensive assessment of multi-round defense. These results demonstrate that CoopGuard provides robust protection for LLMs in multi-round adversarial scenarios.
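The abstract describes a System Agent that routes each round of an interaction to one of three specialized agents, conditioning on the accumulated interaction history. The sketch below illustrates that coordination pattern in a minimal form; the agent names follow the paper, but the routing policy, class structure, and all method signatures are hypothetical placeholders, not the authors' implementation.

```python
# Hypothetical sketch of a CoopGuard-style stateful coordinator.
# The escalation policy below (defer -> tempt -> forensic as rounds
# accumulate) is purely illustrative.

class Agent:
    def __init__(self, name):
        self.name = name

    def respond(self, query, state):
        # Placeholder round-level strategy; each real agent would apply
        # its own defense behavior (deferring, tempting, or forensics).
        return f"[{self.name}] handled: {query}"


class SystemAgent:
    """Coordinator that conditions agent selection on the defense state
    (the interaction history so far)."""

    def __init__(self):
        self.agents = {
            "defer": Agent("DeferringAgent"),
            "tempt": Agent("TemptingAgent"),
            "forensic": Agent("ForensicAgent"),
        }
        self.state = []  # evolving defense state: (query, agent) pairs

    def route(self, query):
        # Toy policy: escalate the strategy as the attack persists.
        round_idx = len(self.state)
        key = ["defer", "tempt", "forensic"][min(round_idx, 2)]
        agent = self.agents[key]
        self.state.append((query, agent.name))
        return agent.respond(query, self.state)


if __name__ == "__main__":
    guard = SystemAgent()
    for q in ["probe 1", "probe 2", "probe 3"]:
        print(guard.route(q))
```

The key property mirrored here is statefulness: the coordinator's decision at round *t* depends on rounds 1..*t−1*, which is what lets the defense adapt as an attacker refines their strategy.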

Siyuan Li, Zehao Liu, Xi Lin, Qinghua Mao, Yuliang Chen, Haoyu Li, Jun Wu, Jianhua Li, Xiu Su• 2026

Related benchmarks

Task | Dataset | Result | Rank
Deceptive Defense | EMRA (test) | MTA (Average): 0.387 | 18
Jailbreak Defense Evaluation | EMRA MTA | ASR: 1.1 | 18
Jailbreak Defense Evaluation | EMRA RQ | ASR: 2.3 | 18
Jailbreak Defense Evaluation | EMRA HQ | ASR: 0.00 | 18
Jailbreak Defense Evaluation | EMRA JQ | Attack Success Rate (ASR): 1 | 18
