Red-Teaming LLM Multi-Agent Systems via Communication Attacks
About
Large Language Model-based Multi-Agent Systems (LLM-MAS) have revolutionized complex problem-solving capability by enabling sophisticated agent collaboration through message-based communications. While the communication framework is crucial for agent coordination, it also introduces a critical yet unexplored security vulnerability. In this work, we introduce Agent-in-the-Middle (AiTM), a novel attack that exploits the fundamental communication mechanisms in LLM-MAS by intercepting and manipulating inter-agent messages. Unlike existing attacks that compromise individual agents, AiTM demonstrates how an adversary can compromise entire multi-agent systems by only manipulating the messages passing between agents. To enable the attack under the challenges of limited control and role-restricted communication format, we develop an LLM-powered adversarial agent with a reflection mechanism that generates contextually-aware malicious instructions. Our comprehensive evaluation across various frameworks, communication structures, and real-world applications demonstrates that LLM-MAS is vulnerable to communication-based attacks, highlighting the need for robust security measures in multi-agent systems.
Related benchmarks
| Task | Dataset | Result | Rank | |
|---|---|---|---|---|
| Target behavior attack | MMLU bio | Attack Success Rate (ASR)98.4 | 16 | |
| Target behavior attack | MMLU phy | ASR99.3 | 16 | |
| Target behavior attack | HumanEval | ASR98.3 | 16 | |
| Target behavior attack | MBPP | ASR99.2 | 16 | |
| Denial of Service (DoS) Attack | HumanEval | ASR63.8 | 8 | |
| Denial of Service (DoS) Attack | MBPP | ASR87.8 | 8 | |
| Target behavior attack | SoftwareDev | Attack Success Rate1 | 8 |