
Latent Debate: A Surrogate Framework for Interpreting LLM Thinking

About

Understanding the internal thinking process of Large Language Models (LLMs) and the causes of hallucinations remains a key challenge. To this end, we introduce latent debate, a novel framework for interpreting model predictions through the lens of implicit internal arguments. Unlike existing self-consistency and multi-agent debate methods, which rely on explicit debates among multiple answers or multiple models, latent debate captures the hidden supporting and attacking signals that arise within a single model during a single inference. We first present a model- and task-agnostic conceptual framework, and then instantiate it symbolically to approximate the thinking process of LLMs on True/False prediction tasks. Empirical studies demonstrate that latent debate is a faithful structured surrogate whose predictions are highly consistent with those of the original LLM. Beyond interpretability, we demonstrate that latent debate provides a strong baseline for hallucination detection. Further analysis reveals strong correlations between hallucinations and debate patterns; for example, a high degree of latent debate in the middle layers is linked to a higher risk of hallucination. These findings position latent debate as a potential framework for understanding the internal mechanisms of LLMs, especially in scenarios where internal (dis)agreements appear during inference.
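The paper's symbolic instantiation is not reproduced here, but the core idea of reading off supporting and attacking signals within a single forward pass can be illustrated with a simple logit-lens-style probe. The sketch below is a rough approximation under several assumptions: the model (`gpt2`), the True/False prompt, projecting each layer's hidden state through the unembedding, and the middle-layer disagreement score are all illustrative choices, not the authors' actual method.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # illustrative stand-in; the paper's models may differ
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

# Token ids for the two candidate answers (leading space matters for BPE).
true_id = tok(" True", add_special_tokens=False).input_ids[0]
false_id = tok(" False", add_special_tokens=False).input_ids[0]

prompt = "Claim: The Eiffel Tower is located in Paris. True or False? Answer:"
inputs = tok(prompt, return_tensors="pt")

with torch.no_grad():
    out = model(**inputs, output_hidden_states=True)

# Project each layer's last-token hidden state through the final layer norm
# and the unembedding (logit lens) to get a per-layer True-vs-False margin.
signals = []
for h in out.hidden_states:  # embedding output + one entry per transformer block
    logits = model.lm_head(model.transformer.ln_f(h[0, -1]))
    margin = (logits[true_id] - logits[false_id]).item()
    signals.append(margin)   # > 0: this layer "supports" True; < 0: it "attacks" True

final_answer = "True" if signals[-1] > 0 else "False"

# Toy "latent debate" score (an assumption, not the paper's definition):
# the fraction of middle layers whose preference disagrees with the final answer.
middle = signals[len(signals) // 3 : 2 * len(signals) // 3]
debate_score = sum((m > 0) != (signals[-1] > 0) for m in middle) / max(len(middle), 1)
print(final_answer, round(debate_score, 2))
```

Under this framing, a larger `debate_score` means more of the middle layers argue against the final answer, which is the kind of pattern the abstract links to a higher risk of hallucination.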

Lihu Chen, Xiang Yin, Francesca Toni • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Hallucination Detection | TriviaQA | AUROC | 0.93 | 265 |
| Hallucination Detection | Company | AUC-ROC | 0.93 | 68 |
| Hallucination Detection | TruthfulQA | AUC (ROC) | 0.62 | 47 |
| Hallucination Detection | common claim | AUROC | 84 | 8 |
| Hallucination Detection | CounterFact | AUROC | 0.84 | 8 |
| Hallucination Detection | MuSiQue | AUROC | 0.77 | 8 |
