
TPA: Next Token Probability Attribution for Detecting Hallucinations in RAG

About

Detecting hallucinations in Retrieval-Augmented Generation (RAG) remains a challenge. Prior approaches attribute hallucinations to a binary conflict between internal knowledge stored in FFNs and the retrieved context. However, this perspective is incomplete: it fails to account for the impact of other components of the LLM, such as the user query, previously generated tokens, the self token, and the final LayerNorm adjustment. To comprehensively capture the impact of these components on hallucination detection, we propose TPA, which mathematically attributes each token's probability to seven distinct sources: Query, RAG Context, Past Token, Self Token, FFN, Final LayerNorm, and Initial Embedding. This attribution quantifies how each source contributes to the generation of the next token. Specifically, we aggregate these attribution scores by Part-of-Speech (POS) tags to quantify the contribution of each model component to the generation of specific linguistic categories within a response. By leveraging these patterns, such as detecting anomalies where Nouns rely heavily on LayerNorm, TPA effectively identifies hallucinated responses. Extensive experiments show that TPA achieves state-of-the-art performance.
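The POS-aggregation step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the token list, score values, source names, and the anomaly threshold are all hypothetical, and the per-token attribution scores are assumed to have already been computed by TPA's decomposition.

```python
from collections import defaultdict

# The seven attribution sources named in the paper (identifiers are our own shorthand).
SOURCES = ["Query", "Context", "PastToken", "SelfToken", "FFN", "LayerNorm", "InitEmbed"]

def aggregate_by_pos(tokens):
    """Average each source's attribution score per POS tag.

    `tokens` is a list of (pos_tag, attribution_dict) pairs, where
    attribution_dict maps each source name to that token's score.
    """
    sums = defaultdict(lambda: defaultdict(float))
    counts = defaultdict(int)
    for pos, attr in tokens:
        counts[pos] += 1
        for src in SOURCES:
            sums[pos][src] += attr[src]
    return {pos: {src: sums[pos][src] / counts[pos] for src in SOURCES}
            for pos in sums}

def flag_noun_layernorm(pos_profile, threshold=0.3):
    """Flag a response whose Nouns draw unusually heavily on LayerNorm.

    The 0.3 threshold is an illustrative placeholder; in practice such a
    cutoff would be tuned on labeled hallucination data.
    """
    noun = pos_profile.get("NOUN")
    return noun is not None and noun["LayerNorm"] > threshold
```

A detector along these lines would run `aggregate_by_pos` over the per-token attributions of a generated response and raise a hallucination flag when a linguistic category's attribution profile deviates from the expected pattern, e.g. Nouns leaning on LayerNorm rather than the RAG Context.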

Pengqian Lu, Jie Lu, Anjin Liu, Guangquan Zhang • 2025

Related benchmarks

Task                    | Dataset             | Result        | Rank
Hallucination Detection | RAGTruth (test)     | AUROC 0.9096  | 83
Hallucination Detection | Dolly AC (test)     | AUC 81.59     | 33
Hallucination Detection | RAGTruth LLaMA2-7B  | Recall 0.8328 | 19
Hallucination Detection | RAGTruth LLaMA3-8B  | Recall 78.6   | 19
Hallucination Detection | Dolly AC LLaMA2-13B | Recall 0.9741 | 19
Hallucination Detection | RAGTruth LLaMA2-13B | Recall 79.13  | 19
Hallucination Detection | Dolly AC LLaMA2-7B  | Recall 78.97  | 19
Hallucination Detection | Dolly AC LLaMA3-8B  | Recall 65.61  | 19
