
To trust or not to trust: Attention-based Trust Management for LLM Multi-Agent Systems

About

Large Language Model-based Multi-Agent Systems (LLM-MAS) have demonstrated strong capabilities in solving complex tasks but remain vulnerable when agents receive unreliable messages. This vulnerability stems from a fundamental gap: LLM agents treat all incoming messages equally without evaluating their trustworthiness. While some existing studies address trustworthiness, they focus on a single type of harmfulness rather than analyzing it holistically from multiple trustworthiness perspectives. We address this gap by proposing a comprehensive definition of trustworthiness inspired by human communication theory (Grice, 1975). Our definition identifies six orthogonal trust dimensions that provide interpretable measures of trustworthiness. Building on this definition, we introduce the Attention Trust Score (A-Trust), a lightweight, attention-based method for evaluating the trustworthiness of messages. We then develop a principled trust management system (TMS) for LLM-MAS that supports both message-level and agent-level trust assessments. Experiments across diverse multi-agent settings and tasks demonstrate that our TMS significantly improves robustness against malicious inputs.
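The abstract describes a two-level design: a per-message trust score (A-Trust) aggregated over six trust dimensions, which then feeds an agent-level trust assessment. The abstract does not give the formulas, so the sketch below is only illustrative: the dimension names, the weighted-average combination, and the exponential moving average for agent-level trust are all assumptions, not the paper's actual A-Trust method (which derives scores from the LLM's attention values).

```python
import numpy as np

# Hypothetical dimension names, loosely inspired by Grice's maxims;
# the paper's actual six dimensions may differ.
DIMENSIONS = ["quantity", "quality", "relevance",
              "manner", "consistency", "intent"]

def message_trust(dim_scores, weights=None):
    """Combine per-dimension scores (each in [0, 1]) into one
    message-level trust score via a weighted average (assumed scheme)."""
    s = np.asarray([dim_scores[d] for d in DIMENSIONS], dtype=float)
    if weights is None:
        w = np.full(len(DIMENSIONS), 1.0 / len(DIMENSIONS))
    else:
        w = np.asarray(weights, dtype=float)
    return float(s @ w)

class AgentTrust:
    """Agent-level trust tracked as an exponential moving average of
    message-level scores (assumed aggregation rule)."""
    def __init__(self, init=0.5, alpha=0.3):
        self.score = init   # prior trust before any messages
        self.alpha = alpha  # weight on the newest message

    def update(self, msg_score):
        self.score = (1 - self.alpha) * self.score + self.alpha * msg_score
        return self.score

    def accept(self, threshold=0.4):
        """Gate messages from agents whose trust fell below a threshold."""
        return self.score >= threshold
```

For example, an agent starting at the neutral prior 0.5 that sends one highly trustworthy message (all dimensions at 0.9) moves to 0.7 · 0.5 + 0.3 · 0.9 = 0.62 and remains accepted.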

Pengfei He, Zhenwei Dai, Xianfeng Tang, Yue Xing, Hui Liu, Jingying Zeng, Qiankun Peng, Shrivats Agrawal, Samarth Varshney, Suhang Wang, Jiliang Tang, Qi He • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Code Generation | MBPP (test) | - | - | 298 |
| Physics Question Answering | MMLU phy | ASR (Accuracy) | 51.4 | 48 |
| Multitask Language Understanding | MMLU Physics | MDR | 90.1 | 45 |
| Math Reasoning | MATH 500 | MDR | 90.4 | 15 |
| Program Synthesis | MBPP | ASR | 45.7 | 12 |
| Mathematical Problem Solving | MATH | Clean Error | 44.6 | 6 |
| Commonsense Reasoning | StrategyQA | Clean Error | 34.9 | 6 |
