To trust or not to trust: Attention-based Trust Management for LLM Multi-Agent Systems
About
Large Language Model-based Multi-Agent Systems (LLM-MAS) have demonstrated strong capabilities in solving complex tasks but remain vulnerable when agents receive unreliable messages. This vulnerability stems from a fundamental gap: LLM agents treat all incoming messages equally without evaluating their trustworthiness. While some existing studies address trustworthiness, they focus on a single type of harmfulness rather than analyzing trustworthiness holistically from multiple perspectives. We address this gap by proposing a comprehensive definition of trustworthiness inspired by human communication theory (Grice, 1975). Our definition identifies six orthogonal trust dimensions that provide interpretable measures of trustworthiness. Building on this definition, we introduce the Attention Trust Score (A-Trust), a lightweight, attention-based method for evaluating the trustworthiness of messages. We then develop a principled trust management system (TMS) for LLM-MAS that supports both message-level and agent-level trust assessments. Experiments across diverse multi-agent settings and tasks demonstrate that our TMS significantly improves robustness against malicious inputs.
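To make the idea concrete, here is a minimal, hypothetical sketch of an attention-based trust score. It is not the paper's actual A-Trust implementation: the function names, the attention-mass heuristic, and the uniform aggregation over dimensions are all illustrative assumptions. The sketch assumes an attention matrix has already been extracted from the evaluating agent's forward pass, along with the token span occupied by the incoming message.

```python
import numpy as np

# Six illustrative trust dimensions, loosely following Grice's maxims
# (hypothetical names; the paper defines its own six dimensions).
TRUST_DIMENSIONS = [
    "quantity", "quality", "relation", "manner", "consistency", "intent",
]

def message_attention_score(attn, msg_span):
    """Average attention mass placed on the incoming message's token span.

    attn: array of shape (num_heads, seq_len, seq_len), rows sum to 1.
    msg_span: (start, end) token indices of the message, end-exclusive.
    """
    start, end = msg_span
    mass = attn[:, :, start:end].sum(axis=-1)  # (num_heads, seq_len)
    return float(mass.mean())

def aggregate_trust(dimension_scores, weights=None):
    """Combine per-dimension scores in [0, 1] into a single trust score."""
    scores = np.asarray([dimension_scores[d] for d in TRUST_DIMENSIONS])
    w = np.ones_like(scores) if weights is None else np.asarray(weights)
    return float((w * scores).sum() / w.sum())

# Toy example: uniform attention over a 10-token sequence,
# with the message occupying tokens 2..5.
attn = np.full((4, 10, 10), 0.1)
print(round(message_attention_score(attn, (2, 6)), 2))  # 0.4
```

A message-level score like this can then be accumulated per sender to form agent-level trust, which is the message-level/agent-level split the abstract describes.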
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Code Generation | MBPP (test) | -- | 298 |
| Physics Question Answering | MMLU phy | ASR (Accuracy) 51.4 | 48 |
| Multitask Language Understanding | MMLU Physics | MDR 90.1 | 45 |
| Math Reasoning | MATH 500 | MDR 90.4 | 15 |
| Program synthesis | MBPP | ASR 45.7 | 12 |
| Mathematical Problem Solving | MATH | Clean Error 44.6 | 6 |
| Commonsense Reasoning | StrategyQA | Clean Error 34.9 | 6 |