
Spilled Energy in Large Language Models

About

We reinterpret the final Large Language Model (LLM) softmax classifier as an Energy-Based Model (EBM), decomposing the sequence-to-sequence probability chain into multiple interacting EBMs at inference. This principled approach allows us to track "energy spills" during decoding, which we empirically show correlate with factual errors, biases, and failures. Similar to Orgad et al. (2025), our method localizes the exact answer token and subsequently tests for hallucinations. Crucially, however, we achieve this without requiring trained probe classifiers or activation ablations. Instead, we introduce two completely training-free metrics derived directly from output logits: spilled energy, which captures the discrepancy between energy values across consecutive generation steps that should theoretically match, and marginalized energy, which is measurable at a single step. Evaluated on nine benchmarks across state-of-the-art LLMs (including LLaMA, Mistral, and Gemma) and on synthetic algebraic operations (Qwen3), our approach demonstrates robust, competitive hallucination detection and cross-task generalization. Notably, these results hold for both pretrained and instruction-tuned variants without introducing any training overhead.
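The EBM reading of the softmax head described above can be made concrete with a small sketch. In this view, each logit is a negative energy, E(y) = -logit_y, the free energy is F = -logsumexp(logits), and the usual next-token probability is recovered as p(y) = exp(F - E(y)). The `spilled_energy` helper below is only a hypothetical illustration of the idea of comparing energy quantities across consecutive decoding steps; it is not the paper's exact formula.

```python
import math

def energies(logits):
    # EBM view of the softmax classifier: each logit is a negative energy,
    # E(y) = -logit_y.
    return [-z for z in logits]

def free_energy(logits):
    # Free energy F = -log Z with Z = sum_y exp(logit_y), i.e. -logsumexp,
    # computed with the standard max-shift for numerical stability.
    m = max(logits)
    return -(m + math.log(sum(math.exp(z - m) for z in logits)))

def softmax_from_energies(logits):
    # The ordinary softmax probability, recovered as p(y) = exp(F - E(y)).
    F = free_energy(logits)
    return [math.exp(F - e) for e in energies(logits)]

def spilled_energy(logits_t, logits_t_plus_1):
    # Hypothetical, training-free discrepancy measure (an assumption for
    # illustration, not the authors' definition): compare free energies at
    # consecutive generation steps; a large gap between quantities that
    # should theoretically match would flag a possible hallucination.
    return abs(free_energy(logits_t) - free_energy(logits_t_plus_1))
```

Both quantities are read directly off the output logits, which is what makes metrics of this kind training-free: no probe classifier is fit and no activations are ablated.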

Adrian Robert Minut, Hazem Dewidar, Iacopo Masi • 2026

Related benchmarks

Task | Dataset | Metric | Result | Rank
Hallucination Detection | TriviaQA (test) | AUROC | 89.01 | 169
Hallucination Detection | HotpotQA (test) | AUROC | 0.9757 | 17
Hallucination Detection | MATH (test) | AUROC | 98.8 | 12
Hallucination Detection | Pool (test) | AUROC | 0.8598 | 12
Hallucination Detection | IMDB (test) | AUROC | 0.5458 | 10
Hallucination Detection | Movies (test) | AUROC | 94.15 | 10
Hallucination Detection | WinoGrande (test) | AUROC | 52.48 | 10
Hallucination Detection | MNLI (test) | AUROC | 100 | 10
Hallucination Detection | HotpotQA-WC (test) | AUROC | 0.8595 | 10
Hallucination Detection | WinoBias (test) | AUROC | 53.53 | 10
