
HalluShift: Measuring Distribution Shifts towards Hallucination Detection in LLMs

About

Large Language Models (LLMs) have recently garnered widespread attention due to their adeptness at generating innovative responses to given prompts across a multitude of domains. However, LLMs suffer from the inherent limitation of hallucination: they generate incorrect information while maintaining well-structured and coherent responses. In this work, we hypothesize that hallucinations stem from the internal dynamics of LLMs. Our observations indicate that, during passage generation, LLMs tend to deviate from factual accuracy in subtle parts of responses, eventually shifting toward misinformation. This phenomenon bears a resemblance to human cognition, where individuals may hallucinate while maintaining logical coherence, embedding uncertainty within minor segments of their speech. To investigate this further, we introduce an innovative approach, HalluShift, designed to analyze distribution shifts in the internal state space and token probabilities of LLM-generated responses. Our method attains superior performance compared to existing baselines across various benchmark datasets. Our codebase is available at https://github.com/sharanya-dasgupta001/hallushift.
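The core idea, as the abstract describes it, is to turn layer-wise internal-state shifts and token-probability signals into features for a hallucination classifier. The sketch below is a minimal illustration of that idea using NumPy on synthetic activations; the function name, the cosine-distance shift measure, and the specific feature choices are hypothetical and are not the authors' actual implementation (see the linked repository for that).

```python
import numpy as np

def distribution_shift_features(hidden_states, token_probs):
    """Hypothetical feature extractor in the spirit of HalluShift.

    hidden_states: array of shape (num_layers, num_tokens, hidden_dim),
                   the model's per-layer activations for one response.
    token_probs:   array of shape (num_tokens,), the probability the
                   model assigned to each generated token.
    Returns a small fixed-length feature vector that a downstream
    classifier could map to a hallucination score.
    """
    # Measure how much the representation "drifts" between consecutive
    # layers: cosine distance between mean activations of layer l and l+1.
    shifts = []
    for l in range(len(hidden_states) - 1):
        ma = hidden_states[l].mean(axis=0)
        mb = hidden_states[l + 1].mean(axis=0)
        cos = ma @ mb / (np.linalg.norm(ma) * np.linalg.norm(mb) + 1e-9)
        shifts.append(1.0 - cos)
    shifts = np.asarray(shifts)

    probs = np.asarray(token_probs)
    return np.array([
        shifts.mean(), shifts.max(),        # layer-wise shift summary
        probs.mean(), probs.min(),          # token-level confidence
        -np.mean(np.log(probs + 1e-9)),     # mean negative log-likelihood
    ])
```

A binary classifier (e.g. logistic regression) trained on such features over labeled faithful/hallucinated responses would then produce the final detection score.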

Sharanya Dasgupta, Sujoy Nath, Arkaprabha Basu, Pourya Shamsolmoali, Swagatam Das • 2025

Related benchmarks

Task                      Dataset     Result (AUC-ROC)  Rank
Hallucination Detection   TruthfulQA  0.8993            47
Hallucination Detection   MSCOCO      65.8              19
Hallucination Detection   llava       71.1              19
Hallucination Detection   TyDiQA-GP   0.8761            8
