
CausalGaze: Unveiling Hallucinations via Counterfactual Graph Intervention in Large Language Models

About

Despite the groundbreaking advances made by large language models (LLMs), hallucination remains a critical bottleneck for their deployment in high-stakes domains. Existing classification-based methods rely mainly on static, passive signals from internal states, which often capture noise and spurious correlations while overlooking the underlying causal mechanisms. To address this limitation, we shift the paradigm from passive observation to active intervention by introducing CausalGaze, a novel hallucination detection framework based on structural causal models (SCMs). CausalGaze models LLMs' internal states as dynamic causal graphs and employs counterfactual interventions to disentangle causal reasoning paths from incidental noise, thereby enhancing model interpretability. Extensive experiments across four datasets and three widely used LLMs demonstrate the effectiveness of CausalGaze; in particular, it achieves an improvement of over 5.2% in AUROC on the TruthfulQA dataset compared to state-of-the-art baselines.
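The abstract gives only the high-level mechanism. The sketch below is a minimal illustration, not the authors' implementation, of what a counterfactual intervention on an LLM's internal states can look like: perturb one hidden-state "node", re-run the forward pass, and measure how much the output distribution shifts. The function name intervention_effect, the GPT-2-style layer path model.transformer.h, and the zero-scaling intervention are all illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch (not the authors' code): a counterfactual intervention on one
# hidden state of a HuggingFace-style causal LM. States whose intervention
# strongly shifts the next-token distribution are treated as causally
# influential rather than incidental noise.
import torch
import torch.nn.functional as F


@torch.no_grad()
def intervention_effect(model, input_ids, layer: int, token_pos: int, scale: float = 0.0):
    """Hypothetical helper: apply do(h := scale * h) to one hidden-state node
    and return the KL divergence between factual and counterfactual
    next-token distributions."""
    factual = model(input_ids)
    p = F.log_softmax(factual.logits[:, -1], dim=-1)

    def do_intervention(module, inputs, output):
        # Counterfactually suppress the hidden state at token_pos.
        hidden = output[0] if isinstance(output, tuple) else output
        hidden = hidden.clone()
        hidden[:, token_pos] = scale * hidden[:, token_pos]
        return (hidden,) + output[1:] if isinstance(output, tuple) else hidden

    # NOTE: the layer path is model-specific; shown here for a GPT-2-style model.
    handle = model.transformer.h[layer].register_forward_hook(do_intervention)
    try:
        counterfactual = model(input_ids)
    finally:
        handle.remove()
    q = F.log_softmax(counterfactual.logits[:, -1], dim=-1)

    # Large divergence => this internal state lies on a path that matters
    # for the generated answer.
    return F.kl_div(q, p, log_target=True, reduction="batchmean").item()
```

Repeating such interventions across layers and token positions yields a per-node effect score; how CausalGaze aggregates these effects into a hallucination classifier is not described in this abstract.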

Linggang Kong, Lei Wu, Yunlong Zhang, Xiaofeng Zhong, Zhen Wang, Yongjie Wang, Yao Pan • 2026

Related benchmarks

Task                      Dataset      Result           Rank
Hallucination Detection   TriviaQA     AUROC 0.9106     438
Hallucination Detection   HaluEval     F1 Score 83.6    75
Hallucination Detection   TruthfulQA   AUROC 0.8851     33
Hallucination Detection   SciQ         AUROC 0.9328     33
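For reference, the table reports two standard detection metrics: AUROC, computed from continuous hallucination scores, and F1, computed from thresholded predictions. The exact evaluation protocol is not given on this page; the snippet below is a generic sketch of how these metrics are typically computed with scikit-learn, using made-up example data.

```python
# Generic metric sketch (not the paper's evaluation code).
from sklearn.metrics import roc_auc_score, f1_score

# Hypothetical labels and detector scores: 1 = hallucinated, 0 = faithful.
y_true = [1, 0, 1, 0, 0, 1, 0, 1]
y_score = [0.91, 0.12, 0.78, 0.33, 0.05, 0.64, 0.41, 0.88]

auroc = roc_auc_score(y_true, y_score)        # threshold-free ranking metric
y_pred = [int(s >= 0.5) for s in y_score]     # e.g. a 0.5 decision threshold
f1 = f1_score(y_true, y_pred)

print(f"AUROC = {auroc:.4f}, F1 = {f1:.4f}")
```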
