
InterrogateLLM: Zero-Resource Hallucination Detection in LLM-Generated Answers

About

Despite the many advances of Large Language Models (LLMs) and their unprecedented rapid evolution, their impact and integration into every facet of our daily lives are limited for various reasons. One critical factor hindering their widespread adoption is the occurrence of hallucinations, where LLMs invent answers that sound realistic yet drift away from factual truth. In this paper, we present a novel method for detecting hallucinations in large language models, tackling a critical obstacle to the adoption of these models in real-world scenarios. Through extensive evaluations across multiple datasets and LLMs, including Llama-2, we study the hallucination levels of several recent models and demonstrate the effectiveness of our method in detecting them automatically. Notably, we observe hallucination rates of up to 87% for Llama-2 in one experiment, where our method achieves a balanced accuracy of 81%, all without relying on external knowledge.
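Concretely, the zero-resource idea in the title can be read as interrogating the model about its own answer: reconstruct the original question from the generated answer several times, then check how closely the reconstructions match the question that was actually asked. Below is a minimal sketch of such a reconstruct-and-compare loop; the `complete` and `embed` callables, the prompt wording, and the threshold are all illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

# Hypothetical interfaces, assumed for this sketch:
#   complete(prompt) -> str    samples one LLM completion (temperature > 0)
#   embed(text) -> np.ndarray  returns a fixed-size text embedding

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def interrogate(question, answer, complete, embed, k=5, threshold=0.8):
    """Zero-resource hallucination check: reconstruct the question from the
    model's own answer k times and compare the reconstructions to the
    question that was actually asked. k and threshold are assumptions."""
    prompt = f"Answer: {answer}\nWhat question does this answer respond to?\nQuestion:"
    q_vec = embed(question)
    sims = [cosine(q_vec, embed(complete(prompt))) for _ in range(k)]
    mean_sim = float(np.mean(sims))
    # Reconstructions that drift away from the original question suggest the
    # answer is not grounded in it, i.e. a likely hallucination.
    return mean_sim < threshold, mean_sim
```

Averaging over several sampled reconstructions makes the check robust to any single noisy generation; in practice the decision threshold would be tuned on labeled validation data.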

Yakir Yehuda, Itzik Malkiel, Oren Barkan, Jonathan Weill, Royi Ronen, Noam Koenigstein • 2024

Related benchmarks

Task                      Dataset                 Metric        Result    Rank
Hallucination Detection   TriviaQA                AUROC         0.6992    265
Hallucination Detection   MATH                    Mean AUROC    0.7041    72
Hallucination Detection   SVAMP                   Mean AUROC    0.705     48
Hallucination Detection   CommonsenseQA           Mean AUROC    0.703     48
Hallucination Detection   Belebele                Mean AUROC    0.6993    48
Hallucination Detection   Average Cross-domain    Mean AUROC    0.702     48
Hallucination Detection   CoQA                    Mean AUROC    0.7013    48
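For reference, AUROC and balanced accuracy figures like those above are computed from a detector's continuous scores against binary hallucination labels. A toy example with scikit-learn; the labels, scores, and 0.5 threshold below are made up for illustration, not taken from the table.

```python
from sklearn.metrics import roc_auc_score, balanced_accuracy_score

# Toy data, purely illustrative: 1 = hallucinated answer, 0 = faithful answer.
labels = [1, 0, 0, 1, 0, 1, 0, 0]
# Detector scores, e.g. 1 - mean cosine similarity from the sketch above,
# so higher means "more likely hallucinated".
scores = [0.81, 0.20, 0.35, 0.66, 0.12, 0.74, 0.41, 0.28]

print(roc_auc_score(labels, scores))           # AUROC on the 0-1 scale
preds = [s >= 0.5 for s in scores]             # binarize at a fixed threshold
print(balanced_accuracy_score(labels, preds))  # metric quoted in the abstract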
