
ReDeEP: Detecting Hallucination in Retrieval-Augmented Generation via Mechanistic Interpretability

About

Retrieval-Augmented Generation (RAG) models are designed to incorporate external knowledge, reducing hallucinations caused by insufficient parametric (internal) knowledge. However, even with accurate and relevant retrieved content, RAG models can still produce hallucinations by generating outputs that conflict with the retrieved information. Detecting such hallucinations requires disentangling how Large Language Models (LLMs) utilize external and parametric knowledge. Current detection methods either focus on only one of these mechanisms or fail to decouple their intertwined effects, making accurate detection difficult. In this paper, we investigate the internal mechanisms behind hallucinations in RAG scenarios. We discover that hallucinations occur when the Knowledge FFNs in LLMs overemphasize parametric knowledge in the residual stream, while Copying Heads fail to effectively retain or integrate external knowledge from retrieved content. Based on these findings, we propose ReDeEP, a novel method that detects hallucinations by decoupling the LLM's utilization of external context and parametric knowledge. Our experiments show that ReDeEP significantly improves RAG hallucination detection accuracy. Additionally, we introduce AARF, which mitigates hallucinations by modulating the contributions of Knowledge FFNs and Copying Heads.
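The detection idea above can be sketched in miniature: score a response as more likely hallucinated when its tokens rely heavily on parametric knowledge (the Knowledge FFN signal) while making little use of the retrieved context (the Copying Head signal). The function below is an illustrative toy, not the paper's actual formulation; the score names (`ecs`, `pks`) and the linear weighting are assumptions made for the example.

```python
import numpy as np

def redeep_style_score(ecs, pks, alpha=1.0, beta=1.0):
    """Toy decoupled hallucination score.

    ecs: per-token external-context scores in [0, 1] (proxy for how much
         Copying Heads attend to / copy from the retrieved passage).
    pks: per-token parametric-knowledge scores in [0, 1] (proxy for how
         strongly Knowledge FFNs push internal knowledge into the
         residual stream).
    alpha, beta: illustrative weights; the real method learns how to
         combine the decoupled signals.

    Returns a scalar: higher means more likely hallucinated.
    """
    ecs = np.asarray(ecs, dtype=float)
    pks = np.asarray(pks, dtype=float)
    # High parametric reliance with low external-context use -> high score.
    return float(np.mean(beta * pks - alpha * ecs))

# Made-up per-token scores for two 4-token responses:
score_faithful = redeep_style_score(ecs=[0.9, 0.8, 0.85, 0.9],
                                    pks=[0.1, 0.2, 0.1, 0.15])
score_halluc = redeep_style_score(ecs=[0.2, 0.1, 0.15, 0.2],
                                  pks=[0.8, 0.9, 0.85, 0.8])
```

In this sketch the hallucinated response scores higher than the faithful one, which is the direction a detector would threshold on; AARF's mitigation can be read as the inverse move, reweighting the two contributions at generation time rather than just measuring them.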

Zhongxiang Sun, Xiaoxue Zang, Kai Zheng, Yang Song, Jun Xu, Xiao Zhang, Weijie Yu, Han Li • 2024

Related benchmarks

Task                     | Dataset                        | Metric        | Result | Rank
Hallucination Detection  | CoQA                           | Mean AUROC    | 0.62   | 100
Hallucination Detection  | RAGTruth (test)                | AUROC         | 0.8244 | 83
Medical LLM Risk Triage  | RETINA-SAFE Stage-1            | Unsafe Recall | 96.23  | 60
Hallucination Detection  | RAGTruth MS MARCO (subsample)  | AUROC         | 0.72   | 45
Hallucination Detection  | RAGTruth CNN/DM (subsample)    | AUROC         | 0.57   | 45
Hallucination Detection  | Dolly AC (test)                | AUC           | 58.42  | 33
Hallucination Detection  | RAGTruth RT-Summ 1.0 (test)    | F1 Score      | 0.5897 | 30
Hallucination Detection  | RAGTruth RT-QA 1.0 (test)      | F1 Score      | 0.627  | 30
Hallucination Detection  | RAGTruth RT-D2T 1.0 (test)     | F1 Score      | 0.6013 | 30
Hallucination Detection  | HalluRAG 1.0 (test)            | F1 Score      | 0.5912 | 30
Showing 10 of 23 rows
