
Dynamic Generation of Multi-LLM Agents Communication Topologies with Graph Diffusion Models

About

The efficiency of multi-agent systems driven by large language models (LLMs) largely hinges on their communication topology. However, designing an optimal topology is a non-trivial challenge, as it requires balancing competing objectives such as task performance, communication cost, and robustness. Existing frameworks often rely on static or hand-crafted topologies, which inherently fail to adapt to diverse task requirements, leading to either excessive token consumption on simple problems or performance bottlenecks on complex ones. To address this challenge, we introduce a novel generative framework called Guided Topology Diffusion (GTD). Inspired by conditional discrete graph diffusion models, GTD formulates topology synthesis as an iterative construction process. At each step, generation is steered by a lightweight proxy model that predicts multi-objective rewards (e.g., accuracy, utility, cost), enabling real-time, gradient-free optimization towards task-adaptive topologies. This iterative, guided synthesis process distinguishes GTD from single-step generative frameworks and lets it better navigate complex design trade-offs. We validate GTD across multiple benchmarks; experiments show that the framework generates highly task-adaptive, sparse, and efficient communication topologies, significantly outperforming existing methods in LLM agent collaboration.
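To make the iterative, proxy-guided construction concrete, here is a minimal sketch in Python. Everything in it is hypothetical: `proxy_reward` is a hand-written stand-in for the learned proxy model (rewarding reachability between agents while penalizing edge count as a cost term), and the greedy edge-by-edge loop only illustrates gradient-free guidance over a discrete construction process, not the paper's actual diffusion sampler.

```python
import itertools

def proxy_reward(adj, n):
    # Hypothetical stand-in for the learned proxy model: reward the number
    # of agents reachable from agent 0 (a crude performance proxy) and
    # penalize the total edge count (a communication-cost proxy).
    edges = sum(map(sum, adj))
    seen, stack = {0}, [0]
    while stack:
        u = stack.pop()
        for v in range(n):
            if adj[u][v] and v not in seen:
                seen.add(v)
                stack.append(v)
    return len(seen) - 0.3 * edges

def guided_synthesis(n_agents, steps):
    # Start from the empty topology; at each step score every candidate
    # edge with the proxy and greedily add the single best edge, stopping
    # once no addition improves the multi-objective score.
    adj = [[0] * n_agents for _ in range(n_agents)]
    for _ in range(steps):
        best, best_r = None, proxy_reward(adj, n_agents)
        for u, v in itertools.permutations(range(n_agents), 2):
            if adj[u][v]:
                continue
            adj[u][v] = 1                      # tentatively add the edge
            r = proxy_reward(adj, n_agents)
            adj[u][v] = 0                      # roll back before comparing
            if r > best_r:
                best, best_r = (u, v), r
        if best is None:
            break  # no candidate edge improves the score: topology is done
        adj[best[0]][best[1]] = 1
    return adj

topology = guided_synthesis(4, 8)
print(sum(map(sum, topology)))  # number of edges in the synthesized topology
```

With this toy reward, the loop settles on a sparse star centered on agent 0: once every agent is reachable, any further edge only adds cost, so synthesis halts early rather than densifying the graph.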

Eric Hanchen Jiang, Guancheng Wan, Sophia Yin, Mengting Li, Yuchen Wu, Xiao Liang, Xinfeng Li, Yizhou Sun, Wei Wang, Kai-Wei Chang, Ying Nian Wu • 2025

Related benchmarks

Task                               | Dataset          | Metric       | Result | Rank
Code Generation                    | HumanEval        | Accuracy     | 92.68  | 99
Reasoning                          | MMLU-Pro         | Accuracy     | 87.14  | 95
Mathematics                        | AIME25           | Accuracy     | 40     | 63
Code Generation                    | LiveCodeBench v6 | Accuracy     | 97.75  | 58
Mathematics                        | Beyond           | Accuracy     | 35     | 26
Mathematics                        | HMMT             | Accuracy     | 43.33  | 26
Mathematics                        | AIME 26          | Accuracy     | 46.67  | 26
Mathematical Reasoning             | Beyond AIME      | Total Tokens | 1.17   | 10
Code Generation                    | LiveCode Bench   | Total Tokens | 3.12   | 10
Multi-task Language Understanding  | MMLU-Pro         | Performance  | 85.71  | 10
