
AgentDropout: Dynamic Agent Elimination for Token-Efficient and High-Performance LLM-Based Multi-Agent Collaboration

About

Multi-agent systems (MAS) based on large language models (LLMs) have demonstrated significant potential in collaborative problem-solving. However, they still face substantial challenges, including low communication efficiency and suboptimal task performance, making the careful design of the agents' communication topologies particularly important. Inspired by the management-theory observation that roles in an efficient team are often dynamically adjusted, we propose AgentDropout, which identifies redundant agents and communication links across different communication rounds by optimizing the adjacency matrices of the communication graphs, and eliminates them to enhance both token efficiency and task performance. Compared to state-of-the-art methods, AgentDropout achieves an average reduction of 21.6% in prompt token consumption and 18.4% in completion token consumption, along with an average performance improvement of 1.14 on the tasks. Furthermore, extended experiments demonstrate that AgentDropout achieves notable domain transferability and structural robustness, revealing its reliability and effectiveness. We release our code at https://github.com/wangzx1219/AgentDropout.
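The pruning idea in the abstract can be sketched as follows. This is a minimal, illustrative NumPy sketch, not the paper's implementation: it assumes a weighted adjacency matrix for one communication round has already been learned (the paper obtains these by optimizing the adjacency matrices directly), and the drop fractions and the degree-based scoring used here are hypothetical stand-ins for the paper's actual criteria.

```python
import numpy as np

def drop_agents_and_edges(adj, agent_frac=0.25, edge_frac=0.25):
    """Prune the weakest agents and edges from one round's weighted
    adjacency matrix (adj[i, j] = learned weight of edge i -> j).

    Illustrative sketch: fractions and scoring are assumptions,
    not AgentDropout's actual optimization criterion.
    """
    n = adj.shape[0]
    # Node dropout: score each agent by its total incident edge weight,
    # then silence the lowest-scoring agents for this round.
    strength = adj.sum(axis=0) + adj.sum(axis=1)
    n_drop = int(n * agent_frac)
    dropped = np.argsort(strength)[:n_drop]
    pruned = adj.copy()
    pruned[dropped, :] = 0.0
    pruned[:, dropped] = 0.0
    # Edge dropout: among surviving edges, remove the weakest fraction.
    live = np.flatnonzero(pruned)
    k = int(live.size * edge_frac)
    if k > 0:
        weakest = live[np.argsort(pruned.ravel()[live])[:k]]
        pruned.ravel()[weakest] = 0.0
    # Binarize: remaining positive weights become communication links.
    return (pruned > 0).astype(int), set(int(i) for i in dropped)

# Usage: a 4-agent round; agent 2 has weak links and gets eliminated,
# reducing the tokens that would be spent prompting it.
weights = np.array([[0.0, 0.9, 0.1, 0.8],
                    [0.7, 0.0, 0.2, 0.6],
                    [0.05, 0.1, 0.0, 0.1],
                    [0.8, 0.5, 0.1, 0.0]])
topology, removed = drop_agents_and_edges(weights)
```

Applying a separate pruning pass per round mirrors the "dynamically adjusted roles" intuition: an agent eliminated in one round can still participate in another.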

Zhexuan Wang, Yutong Wang, Xuebo Liu, Liang Ding, Miao Zhang, Jie Liu, Min Zhang • 2025

Related benchmarks

Task                                Dataset      Metric                      Result   Rank
Code Generation                     HumanEval    Pass@1                      93.15    1036
Language Understanding              MMLU         Accuracy                    85.62    825
Multitask Language Understanding    MMLU         Accuracy                    85.62    413
Mathematical Reasoning              SVAMP        Accuracy                    91.04    403
Mathematical Reasoning              AIME         AIME Accuracy               78.13    288
Arithmetic Reasoning                MultiArith   Accuracy                    95.6     229
Mathematical Reasoning              AQUA         Accuracy                    80.94    146
Mathematical Reasoning              MultiArith   Accuracy                    100      143
Multiple-choice Question Answering  MMLU-Pro     MMLU-Pro Overall Accuracy   82.37    119
Code Generation                     HumanEval    Accuracy                    91.46    99

(Showing 10 of 21 rows)
