
Detecting Hallucinations in Large Language Model Generation: A Token Probability Approach

About

Concerns regarding the propensity of Large Language Models (LLMs) to produce inaccurate outputs, known as hallucinations, have escalated. Detecting hallucinations is vital for ensuring the reliability of applications that rely on LLM-generated content. Current methods often demand substantial resources: they rely on large LLMs, on supervised learning over multidimensional features, or on intricate linguistic and semantic analyses that are difficult to reproduce and largely depend on using the same LLM that hallucinated. This paper introduces a supervised learning approach that employs two simple classifiers using only four numerical features derived from token and vocabulary probabilities obtained from other LLM evaluators, which need not be the same model that generated the text. The method yields promising results, surpassing state-of-the-art outcomes on multiple tasks across three benchmarks. We also provide a comprehensive examination of the strengths and weaknesses of our approach, highlighting the significance of the features used and of the LLM employed as evaluator. Our code is publicly available at https://github.com/Baylor-AI/HalluDetect.
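To make the idea concrete, here is a minimal sketch of the kind of pipeline the abstract describes: reduce a generated sequence to a handful of scalar features computed from per-token probabilities under an evaluator LLM, then feed those features to a simple classifier. The four features below (mean log-probability, minimum token probability, mean token probability, fraction of low-confidence tokens) are illustrative assumptions for the sketch, not necessarily the exact four features used in the paper.

```python
import math

def hallucination_features(token_probs):
    """Derive four scalar features from the per-token probabilities that an
    evaluator LLM assigns to a generated sequence.

    NOTE: this feature set is an assumed example, not the paper's exact set.
    """
    n = len(token_probs)
    mean_logprob = sum(math.log(p) for p in token_probs) / n  # avg log-likelihood
    min_prob = min(token_probs)                               # least-confident token
    mean_prob = sum(token_probs) / n                          # avg confidence
    # Fraction of tokens below an assumed low-confidence threshold.
    low_frac = sum(p < 0.1 for p in token_probs) / n
    return [mean_logprob, min_prob, mean_prob, low_frac]

# Example: a fairly confident sequence vs. one with a very unlikely token.
confident = hallucination_features([0.9, 0.8, 0.95, 0.85])
shaky = hallucination_features([0.9, 0.02, 0.95, 0.85])
```

In a full implementation, these four-dimensional vectors (one per labeled example) would be used to train a lightweight classifier such as logistic regression; because the features come from any evaluator LLM's output distribution, the detector does not need access to the model that produced the text.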

Ernesto Quevedo, Jorge Yero, Rachel Koerner, Pablo Rivas, Tomas Cerny • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Hallucination Detection | RAGTruth (test) | AUROC | 0.663 | 83 |
| Hallucination Detection | FAVA-Annotations (test) | AUROC | 53.88 | 9 |
| Hallucination Detection | HUB (test) | Algorithmic | 26.44 | 7 |
