
HaloScope: Harnessing Unlabeled LLM Generations for Hallucination Detection

About

The surge in applications of large language models (LLMs) has prompted concerns about the generation of misleading or fabricated information, known as hallucinations. Therefore, detecting hallucinations has become critical to maintaining trust in LLM-generated content. A primary challenge in learning a truthfulness classifier is the lack of a large amount of labeled truthful and hallucinated data. To address the challenge, we introduce HaloScope, a novel learning framework that leverages the unlabeled LLM generations in the wild for hallucination detection. Such unlabeled data arises freely upon deploying LLMs in the open world, and consists of both truthful and hallucinated information. To harness the unlabeled data, we present an automated membership estimation score for distinguishing between truthful and untruthful generations within unlabeled mixture data, thereby enabling the training of a binary truthfulness classifier on top. Importantly, our framework does not require extra data collection and human annotations, offering strong flexibility and practicality for real-world applications. Extensive experiments show that HaloScope can achieve superior hallucination detection performance, outperforming the competitive rivals by a significant margin. Code is available at https://github.com/deeplearningwisc/haloscope.
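To make the pipeline concrete, here is a minimal sketch of the idea in the abstract: score unlabeled generations by their projection onto a dominant latent subspace, pseudo-label by thresholding the score, and train a binary truthfulness classifier on top. All names (`membership_scores`, `haloscope_sketch`) and details (a top-k SVD projection as the membership score, a median threshold, a small ReLU network as the classifier) are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

def membership_scores(embeddings, k=1):
    """Score each generation by the magnitude of its projection onto the
    top-k singular subspace of the centered embedding matrix.
    Assumption: hallucinated generations share a dominant latent
    direction, so they receive larger scores."""
    centered = embeddings - embeddings.mean(axis=0, keepdims=True)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    proj = centered @ vt[:k].T            # (n_samples, k) subspace coordinates
    return np.abs(proj).sum(axis=1)

def train_classifier(X, y, hidden=16, lr=0.5, epochs=500, seed=0):
    """Fit a tiny one-hidden-layer network on pseudo-labels y with
    full-batch gradient descent on the binary cross-entropy loss."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W1 = rng.normal(0.0, 1.0 / np.sqrt(d), (d, hidden))
    b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 1.0 / np.sqrt(hidden), hidden)
    b2 = 0.0
    for _ in range(epochs):
        H = np.maximum(X @ W1 + b1, 0.0)            # ReLU hidden layer
        p = 1.0 / (1.0 + np.exp(-(H @ W2 + b2)))    # sigmoid output
        g = (p - y) / n                             # dLoss/dlogit for BCE
        gH = np.outer(g, W2) * (H > 0)              # backprop through ReLU
        W2 -= lr * (H.T @ g); b2 -= lr * g.sum()
        W1 -= lr * (X.T @ gH); b1 -= lr * gH.sum(axis=0)
    return W1, b1, W2, b2

def predict_hallucinated(X, params):
    """Flag a generation as hallucinated when the classifier's sigmoid
    output exceeds 0.5."""
    W1, b1, W2, b2 = params
    H = np.maximum(X @ W1 + b1, 0.0)
    return 1.0 / (1.0 + np.exp(-(H @ W2 + b2))) > 0.5

def haloscope_sketch(embeddings):
    """Full pipeline: score unlabeled generations, pseudo-label by
    thresholding at the median score, train the classifier."""
    scores = membership_scores(embeddings)
    pseudo = (scores > np.median(scores)).astype(float)
    return pseudo, train_classifier(embeddings, pseudo)
```

In a real deployment, `embeddings` would be hidden-state representations extracted from the LLM for each unlabeled generation; no human annotation enters the loop, matching the setting the abstract describes.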

Xuefeng Du, Chaowei Xiao, Yixuan Li • 2024

Related benchmarks

Task                     Dataset               Metric          Result   Rank
Hallucination Detection  TriviaQA              AUROC           0.8654   265
Hallucination Detection  TruthfulQA            AUROC           0.7864   47
Hallucination Detection  NQ-Open               AUROC           0.8584   27
Hallucination Detection  MMLU-Pro              AUROC           81.08    15
Hallucination Detection  LLaMa 1 (test)        AUROC           0.861    15
Hallucination Detection  WebQuestions          AUROC           80.43    15
Hallucination Detection  TyDiQA-GP             AUROC           0.9404   8
Hallucination Detection  MMEvalPro perception  F1 (Faithful)   91.5     5
