
Among Us: Measuring and Mitigating Malicious Contributions in Model Collaboration Systems

About

Language models (LMs) are increasingly used in collaboration: multiple LMs trained by different parties collaborate through routing systems, multi-agent debate, model merging, and more. Critical safety risks remain in this decentralized paradigm: what if some of the models in multi-LLM systems are compromised or malicious? We first quantify the impact of malicious models by engineering four categories of malicious LMs, plugging them into four types of popular model collaboration systems, and evaluating the compromised systems across 10 datasets. We find that malicious models severely degrade multi-LLM systems, especially in the reasoning and safety domains, where performance drops by 7.12% and 7.94% on average. We then propose mitigation strategies that alleviate the impact of malicious components by employing external supervisors that oversee the collaboration and disable or mask out suspect models to reduce their influence. On average, these strategies recover 95.31% of the initial performance, while making model collaboration systems fully resistant to malicious models remains an open research question.
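The supervisor-based mitigation described above can be illustrated with a minimal sketch. This is a hypothetical simplification, not the paper's implementation: the `supervise` function, the trust scores, and the majority-vote aggregation are all illustrative assumptions about how a supervisor might mask low-trust contributions before the collaboration system aggregates answers.

```python
from collections import Counter

def supervise(answers, trust, threshold=0.5):
    """Hypothetical supervisor: mask contributions from models whose
    trust score falls below the threshold, then aggregate the remaining
    answers by majority vote. Trust scores here are illustrative; a real
    supervisor would estimate them from model behavior."""
    kept = [a for a, t in zip(answers, trust) if t >= threshold]
    if not kept:  # everything masked: fall back to the full pool
        kept = answers
    return Counter(kept).most_common(1)[0][0]

# Example: the third model is a compromised participant.
answers = ["42", "42", "13", "42"]
trust = [0.9, 0.8, 0.1, 0.7]  # supervisor-assigned (hypothetical) scores
print(supervise(answers, trust))  # the low-trust model is masked out
```

The design choice here mirrors the paper's high-level strategy: rather than hardening every collaborator, a single external component gates which contributions reach the aggregation step.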

Ziyuan Yang, Wenxuan Ding, Shangbin Feng, Yulia Tsvetkov • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Code Generation | HumanEval | Pass@1 | 89.3 | 850 |
| Multi-task Language Understanding | MMLU | Accuracy | 64.8 | 842 |
| Mathematical Reasoning | GSM8K | Accuracy | 75 | 212 |
| Instruction Following | IFBench | Pass@1 (Strict) | 20.8 | 68 |
| Safety Evaluation | CocoNot | Safety Score | 0.613 | 36 |
