
Council Mode: Mitigating Hallucination and Bias in LLMs via Multi-Agent Consensus

About

Large Language Models (LLMs), particularly those employing Mixture-of-Experts (MoE) architectures, have achieved remarkable capabilities across diverse natural language processing tasks. However, these models frequently suffer from hallucinations -- generating plausible but factually incorrect content -- and exhibit systematic biases that are amplified by uneven expert activation during inference. In this paper, we propose the Council Mode, a novel multi-agent consensus framework that addresses these limitations by dispatching queries to multiple heterogeneous frontier LLMs in parallel and synthesizing their outputs through a dedicated consensus model. The Council pipeline operates in three phases: (1) an intelligent triage classifier that routes queries based on complexity, (2) parallel expert generation across architecturally diverse models, and (3) a structured consensus synthesis that explicitly identifies agreement, disagreement, and unique findings before producing the final response. We implement and evaluate this architecture within an open-source AI workspace. Our comprehensive evaluation across multiple benchmarks demonstrates that the Council Mode achieves a 35.9% relative reduction in hallucination rates on the HaluEval benchmark and a 7.8-point improvement on TruthfulQA compared to the best-performing individual model, while maintaining significantly lower bias variance across domains. We provide the mathematical formulation of the consensus mechanism, detail the system architecture, and present extensive empirical results with ablation studies.
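The three-phase pipeline described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the triage heuristic, the expert names, and the claim-set representation are all hypothetical stand-ins (the paper uses a learned triage classifier and real frontier-LLM calls), but the structure mirrors the routing, parallel generation, and agreement/disagreement/unique partitioning the abstract describes.

```python
from concurrent.futures import ThreadPoolExecutor

def triage(query: str) -> str:
    # Phase 1: route by complexity. A word-count threshold is a toy
    # stand-in for the paper's learned triage classifier.
    return "council" if len(query.split()) > 8 else "single"

def mock_expert(name: str) -> set:
    # Stand-in for a call to one heterogeneous frontier LLM; each
    # expert's answer is modeled as a set of atomic claims.
    answers = {
        "expert_a": {"Paris is the capital of France", "Population ~2.1M"},
        "expert_b": {"Paris is the capital of France", "Founded in 3rd century BC"},
        "expert_c": {"Paris is the capital of France", "Population ~2.1M"},
    }
    return answers[name]

def consensus(outputs: list) -> dict:
    # Phase 3: partition claims into agreement, disagreement, and
    # unique findings before synthesizing a final response.
    all_claims = set().union(*outputs)
    agreement = set.intersection(*outputs)
    counts = {c: sum(c in o for o in outputs) for c in all_claims}
    contested = {c for c, n in counts.items() if 1 < n < len(outputs)}
    unique = {c for c, n in counts.items() if n == 1}
    return {"agreement": agreement, "contested": contested, "unique": unique}

def council_pipeline(query, experts=("expert_a", "expert_b", "expert_c")):
    if triage(query) == "single":
        return mock_expert(experts[0])      # simple query: single model
    with ThreadPoolExecutor() as pool:      # Phase 2: parallel generation
        outputs = list(pool.map(mock_expert, experts))
    return consensus(outputs)

result = council_pipeline("What is the capital of France and what is its history?")
```

Claims endorsed by all experts land in `agreement`, partially supported ones in `contested`, and single-source findings in `unique`; a real deployment would hand this structured partition to the dedicated consensus model for final synthesis.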

Shuai Wu, Xue Li, Yanna Feng, Yufang Li, Zhijun Wang • 2026

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Hallucination Detection | HaluEval | - | 15 |
| Truthfulness Evaluation | TruthfulQA | TruthfulQA Score 82.6 | 12 |
| Bias Evaluation | Consolidated Evaluation Dimensions | Bias σ² 0.003 | 6 |
| Quality Assessment | Consolidated Evaluation Dimensions | Quality Score 91.7 | 6 |
| Reasoning | Multi-Domain Reasoning | Accuracy 83.9 | 6 |
| Hallucination Evaluation | HaluEval | Average Score 10.7 | 6 |
