Our new X account is live! Follow @wizwand_team for updates
WorkDL logo mark

Red-Teaming LLM Multi-Agent Systems via Communication Attacks

About

Large Language Model-based Multi-Agent Systems (LLM-MAS) have revolutionized complex problem-solving capability by enabling sophisticated agent collaboration through message-based communications. While the communication framework is crucial for agent coordination, it also introduces a critical yet unexplored security vulnerability. In this work, we introduce Agent-in-the-Middle (AiTM), a novel attack that exploits the fundamental communication mechanisms in LLM-MAS by intercepting and manipulating inter-agent messages. Unlike existing attacks that compromise individual agents, AiTM demonstrates how an adversary can compromise entire multi-agent systems by only manipulating the messages passing between agents. To enable the attack under the challenges of limited control and role-restricted communication format, we develop an LLM-powered adversarial agent with a reflection mechanism that generates contextually-aware malicious instructions. Our comprehensive evaluation across various frameworks, communication structures, and real-world applications demonstrates that LLM-MAS is vulnerable to communication-based attacks, highlighting the need for robust security measures in multi-agent systems.

Pengfei He, Yupin Lin, Shen Dong, Han Xu, Yue Xing, Hui Liu• 2025

Related benchmarks

TaskDatasetResultRank
Target behavior attackMMLU bio
Attack Success Rate (ASR)98.4
16
Target behavior attackMMLU phy
ASR99.3
16
Target behavior attackHumanEval
ASR98.3
16
Target behavior attackMBPP
ASR99.2
16
Denial of Service (DoS) AttackHumanEval
ASR63.8
8
Denial of Service (DoS) AttackMBPP
ASR87.8
8
Target behavior attackSoftwareDev
Attack Success Rate1
8
Showing 7 of 7 rows

Other info

Follow for update