Listen to the Layers: Mitigating Hallucinations with Inter-Layer Disagreement
About
Pretrained Large Language Models (LLMs) are prone to generating fluent yet factually incorrect text, a phenomenon known as hallucination, which undermines their reliability and utility in downstream tasks. We hypothesize that the factuality of a generated text span is correlated with its representational instability across the model's internal layers. Based on this hypothesis, we propose the CoCoA (Confusion and Consistency Aware) decoder, a novel, training-free decoding algorithm that mitigates hallucinations at inference time by listening to these signals in the middle layers. We introduce two metrics that quantify this instability in the middle layers and use them to penalize outputs exhibiting high internal confusion, thereby steering the model toward more internally consistent and factually grounded outputs. We further propose a self-information-gated variant, CoCoA-SIG, that dynamically modulates this penalty to selectively target high-surprise, unstable generations. Extensive experiments on diverse tasks, including question answering, summarization, and code generation, demonstrate that CoCoA significantly improves factual correctness across multiple model families (e.g., Llama-3, Qwen-2.5, Mistral). By leveraging model-intrinsic signals, CoCoA offers an effective and broadly applicable method for enhancing the trustworthiness of LLMs at inference time, without requiring any model retraining.
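The abstract does not spell out the exact metrics, so the following PyTorch sketch illustrates just one plausible realization of the idea: read next-token distributions off a band of middle layers with a logit-lens projection, score each candidate token by how much those layer-wise distributions disagree, and subtract that score (optionally gated by self-information, loosely mirroring CoCoA-SIG) from the final log-probabilities. The function name `cocoa_step`, the hyperparameters `alpha` and `band`, and the variance-based disagreement score are illustrative assumptions, not the authors' exact method.

```python
# Hedged sketch of an inter-layer disagreement penalty at decoding time.
# The logit-lens readout, the variance-based disagreement score, and the
# self-information gate below are assumptions, not the paper's exact CoCoA.
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Meta-Llama-3-8B"  # any Llama-style causal LM
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16)
model.eval()


@torch.no_grad()
def cocoa_step(input_ids, alpha=1.0, band=(0.4, 0.8), gated=False):
    """One greedy decoding step with an inter-layer disagreement penalty.

    alpha: penalty strength (hypothetical hyperparameter).
    band:  fraction of the model's depth treated as "middle layers".
    gated: crude stand-in for CoCoA-SIG-style self-information gating.
    """
    out = model(input_ids, output_hidden_states=True)
    logp_final = F.log_softmax(out.logits[:, -1, :].float(), dim=-1)  # [1, V]

    # hidden_states is (embeddings, layer_1, ..., layer_N); pick a middle band.
    hidden = out.hidden_states
    n_layers = len(hidden) - 1
    lo, hi = int(band[0] * n_layers), int(band[1] * n_layers)

    # Logit-lens readout: decode each middle layer's last position through the
    # final norm and LM head. `model.model.norm` matches Llama/Mistral/Qwen2
    # layouts in transformers; other architectures name it differently.
    layer_logps = [
        F.log_softmax(model.lm_head(model.model.norm(h[:, -1, :])).float(), dim=-1)
        for h in hidden[lo : hi + 1]
    ]
    stack = torch.stack(layer_logps)  # [L, 1, V]

    # Per-vocabulary disagreement: variance of log-probs across middle layers.
    # High variance means the layers "disagree" about that candidate token.
    disagreement = stack.var(dim=0)  # [1, V]

    penalty = alpha * disagreement
    if gated:
        # Assumed SIG-style gate: scale the penalty by each candidate's
        # self-information (-log p), so only high-surprise tokens are
        # penalized strongly. The paper's gating function may differ.
        penalty = penalty * (-logp_final) / (-logp_final).max()

    next_id = (logp_final - penalty).argmax(dim=-1, keepdim=True)
    return torch.cat([input_ids, next_id], dim=-1)


# Usage: greedily decode a short continuation with the penalty active.
ids = tok("The capital of Australia is", return_tensors="pt").input_ids
for _ in range(8):
    ids = cocoa_step(ids, alpha=1.0, gated=True)
print(tok.decode(ids[0], skip_special_tokens=True))
```

Because the penalty is applied token by token at inference time, the sketch preserves the training-free property the abstract emphasizes: no weights change, only the decoding distribution.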
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Summarization | XSum (test) | -- | -- | 231 |
| Multiple-Choice | TruthfulQA | MC1 Accuracy | 52.39 | 83 |
| Open-ended generation | TruthfulQA open-ended (full), without rejected samples | Truthfulness | 74.67 | 39 |
| Open-ended generation | TruthfulQA open-ended (full), all samples | Truthfulness | 81 | 39 |
| Open-ended generation | NQ | Exact Match (EM) | 0.512 | 4 |
| Open-ended generation | NQ-Swap | EM | 41.1 | 4 |