Latent Debate: A Surrogate Framework for Interpreting LLM Thinking
About
Understanding the internal thinking process of Large Language Models (LLMs) and the causes of hallucinations remains a key challenge. To this end, we introduce latent debate, a novel framework for interpreting model predictions through the lens of implicit internal arguments. Unlike current work on self-consistency and multi-agent debate, which relies on explicit debates among multiple answers or multiple models, latent debate captures the hidden supporting and attacking signals that arise within a single model during a single inference. We first present a model- and task-agnostic conceptual framework, and then instantiate it symbolically to approximate the thinking process of LLMs on True/False prediction tasks. Empirical studies demonstrate that latent debate is a faithful structured surrogate model whose predictions are highly consistent with those of the original LLM. Beyond interpretability, we demonstrate that latent debate provides a strong baseline for hallucination detection. Further analysis reveals strong correlations between hallucinations and debate patterns; for example, a high degree of latent debate in the middle layers is linked to a higher risk of hallucination. These findings position latent debate as a promising framework for understanding the internal mechanisms of LLMs, especially in scenarios where internal (dis)agreements appear during the inference steps.
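To make the idea concrete, the sketch below illustrates one possible reading of the abstract: each layer contributes a signal that either supports or attacks a candidate True/False answer, the aggregated signals form a surrogate prediction, and heavy disagreement in the middle layers is flagged as elevated hallucination risk. All names, thresholds, and numbers here are invented for illustration and are not the paper's actual symbolic instantiation.

```python
# Hypothetical sketch of "latent debate": per-layer signals in [-1, 1],
# where > 0 supports the answer "True" and < 0 attacks it. The function
# name, the mid-layer band, and the conflict threshold are all assumptions
# made for this illustration, not the paper's method.

def latent_debate(layer_signals, mid_band=(0.25, 0.75), conflict_threshold=0.5):
    """Return (surrogate prediction, mid-layer debate degree, risk flag)."""
    n = len(layer_signals)
    prediction = sum(layer_signals) > 0  # surrogate True/False prediction

    # Degree of latent debate in the middle layers: the fraction of
    # mid-layer signals that disagree with the final prediction.
    lo, hi = int(mid_band[0] * n), int(mid_band[1] * n)
    mid = layer_signals[lo:hi]
    disagree = sum(1 for s in mid if (s > 0) != prediction) / max(len(mid), 1)

    # High mid-layer disagreement is flagged as elevated hallucination
    # risk, mirroring the correlation reported in the abstract.
    return prediction, disagree, disagree >= conflict_threshold

# Example: early and late layers support "True", middle layers attack it.
pred, debate, risky = latent_debate([0.2, 0.4, -0.6, -0.5, -0.3, 0.7, 0.8, 0.9])
```

In this toy run the summed signals favor "True", but three of the four mid-layer signals attack that answer, so the prediction is flagged as a hallucination risk.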
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Hallucination Detection | TriviaQA | AUROC | 0.93 | 265 |
| Hallucination Detection | Company | AUROC | 0.93 | 68 |
| Hallucination Detection | TruthfulQA | AUROC | 0.62 | 47 |
| Hallucination Detection | CommonClaim | AUROC | 0.84 | 8 |
| Hallucination Detection | CounterFact | AUROC | 0.84 | 8 |
| Hallucination Detection | MuSiQue | AUROC | 0.77 | 8 |