
Multi-Agent Debate with Memory Masking

About

Large language models (LLMs) have recently demonstrated impressive capabilities in reasoning tasks. Mainstream LLM reasoning frameworks currently focus on scaling up inference-time sampling to enhance performance. Among these frameworks, *multi-agent debate* (MAD), which employs multiple LLMs as agents that reason through multi-round debate, has emerged as a powerful paradigm: it allows agents to access memories from previous rounds to alleviate fallacious content and iteratively refine their reasoning. However, although MAD significantly improves the reasoning capabilities of LLMs, we observe in this paper that erroneous memories remain, and that LLM agents are vulnerable to them. To explore this phenomenon, we provide a theoretical insight that the performance of MAD depends heavily on the quality of memories derived from the previous debate, indicating that erroneous memories pose a threat to the performance of MAD. To address this problem, we introduce a simple yet effective framework, *multi-agent debate with memory masking* (MAD-M$^2$), which improves the robustness of MAD by allowing LLM agents to mask erroneous memories from the previous debate round at the beginning of each round. In this way, MAD-M$^2$ polishes the contextual information before each debate round, preserving informative and meaningful memories while discarding erroneous ones. Extensive experiments and analyses on mainstream mathematical and logical reasoning benchmarks demonstrate that MAD-M$^2$ can identify erroneous memories and achieve better reasoning performance than MAD.
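The debate-with-masking loop described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `Memory` record, the per-memory `confidence` signal, and the threshold-based `mask_memories` criterion are all assumptions standing in for however MAD-M$^2$ actually judges a memory erroneous, and the agent callables stand in for real LLM queries.

```python
# Hypothetical sketch of multi-agent debate with memory masking.
# Agents are plain callables here; a real system would call an LLM.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Memory:
    agent_id: int
    content: str
    confidence: float  # assumed quality signal used to decide masking

def mask_memories(memories: List[Memory], threshold: float = 0.5) -> List[Memory]:
    """Keep memories judged reliable; discard (mask) erroneous ones."""
    return [m for m in memories if m.confidence >= threshold]

def debate(agents: List[Callable[[str, List[Memory]], Memory]],
           question: str, rounds: int = 3) -> List[Memory]:
    """Run a multi-round debate, masking erroneous memories each round."""
    memories: List[Memory] = []
    for _ in range(rounds):
        # Each round begins by masking the previous round's erroneous
        # memories, so agents condition only on polished context.
        context = mask_memories(memories)
        memories = [agent(question, context) for agent in agents]
    return memories
```

With toy agents that emit fixed answers, `debate` returns one memory per agent per final round, and low-confidence memories never reach the next round's context.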

Hongduan Tian, Xiao Feng, Ziyuan Zhao, Xiangyu Zhu, Rolan Yan, Bo Han • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Commonsense Reasoning | CSQA | Accuracy | 78.1 | 126 |
| Logical Reasoning | Formal Logic | Accuracy | 58.7 | 106 |
| Arithmetic Reasoning | Arithmetics | Accuracy | 91.3 | 106 |
| Grade School Math Reasoning | GSM8K | Accuracy | 81.7 | 77 |
| Medical Reasoning | Professional Medicine | Accuracy | 73 | 56 |
| Helpful and Harmless Preference Reasoning | HH-RLHF | Accuracy | 52.1 | 56 |
| Mathematical Reasoning | GSM8K | Accuracy | 97.8 | 20 |
| Language Understanding | MMLU-Pro | Accuracy | 75.8 | 20 |
| Mathematical Reasoning | AIME 25 | Accuracy | 76.7 | 20 |
| Mathematical Reasoning | AIME24 | Accuracy | 80 | 20 |

Showing 10 of 11 rows
