
Improving Multi-Agent Debate with Sparse Communication Topology

About

Multi-agent debate has proven effective in improving the quality of large language models on reasoning and factuality tasks. While various role-playing strategies in multi-agent debates have been explored, existing approaches adopt a brute-force communication strategy: each agent communicates with all other agents. In this paper, we systematically investigate the effect of communication connectivity in multi-agent systems. Our experiments on GPT and Mistral models reveal that multi-agent debates leveraging sparse communication topology can achieve comparable or superior performance while significantly reducing computational costs. Furthermore, we extend the multi-agent debate framework to multimodal reasoning and alignment labeling tasks, showcasing its broad applicability and effectiveness. Our findings underscore the importance of communication connectivity in enhancing the efficiency and effectiveness of the "society of minds" approach.
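The core idea of a sparse communication topology can be illustrated with a minimal sketch of one debate round. The function and topology names below are hypothetical (the paper does not publish this code), and the `update_fn` stands in for an actual LLM call:

```python
# Sketch of one debate round under a configurable communication topology.
# Hypothetical helper; `update_fn` is a placeholder for a real LLM call
# (e.g. to GPT or Mistral), not the paper's implementation.

def debate_round(answers, adjacency, update_fn):
    """Each agent revises its answer using only its neighbors' answers.

    answers:   current answer per agent
    adjacency: adjacency[i] = indices of agents visible to agent i
    update_fn: takes (own answer, list of neighbor answers) and
               returns a revised answer
    """
    return [
        update_fn(answers[i], [answers[j] for j in adjacency[i]])
        for i in range(len(answers))
    ]

n = 4
# Fully connected topology: every agent reads all other agents' answers,
# so n * (n - 1) messages circulate per round.
full = {i: [j for j in range(n) if j != i] for i in range(n)}
# Sparse ring topology: each agent reads only one neighbor's answer,
# cutting the per-round message count to n.
ring = {i: [(i + 1) % n] for i in range(n)}
```

With 4 agents, the fully connected topology requires 12 neighbor reads per round while the ring requires only 4, which is the source of the computational savings the abstract describes.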

Yunxuan Li, Yibing Du, Jiageng Zhang, Le Hou, Peter Grabowski, Yeqing Li, Eugene Ie • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Question Answering | ARC Challenge | - | - | 749 |
| Mathematical Reasoning | MATH | Accuracy | 49.5 | 643 |
| Long-context Language Understanding | LongBench | M-Avg | 54.55 | 219 |
| Science Question Answering | ARC-C | - | - | 127 |
| Graduate-level Question Answering | GPQA | Accuracy | 32.8 | 114 |
| Question Answering | SQuAD | Exact Match | 88.33 | 50 |
| Language Understanding | MMLU | RA | 71.67 | 31 |
| Long-context Understanding | LongBench | Average Context Length (tokens) | 3.95e+5 | 16 |
| Mathematical Reasoning | MATH | Avg Context Length (tokens) | 4.21e+3 | 16 |
| Multi-task Language Understanding | MMLU-Pro | Average Context Length (tokens) | 6.67e+3 | 16 |

Showing 10 of 16 rows.
