HaluEval: A Large-Scale Hallucination Evaluation Benchmark for Large Language Models

About

Large language models (LLMs), such as ChatGPT, are prone to generating hallucinations, i.e., content that conflicts with the source or cannot be verified against factual knowledge. To understand what types of content LLMs tend to hallucinate, and to what extent, we introduce the Hallucination Evaluation benchmark for Large Language Models (HaluEval), a large collection of generated and human-annotated hallucinated samples for evaluating how well LLMs recognize hallucination. To generate these samples, we propose a ChatGPT-based two-step framework, i.e., sampling-then-filtering. In addition, we hire human labelers to annotate hallucinations in ChatGPT responses. The empirical results suggest that ChatGPT is likely to generate hallucinated content on specific topics by fabricating unverifiable information (in about 19.5% of responses). Moreover, existing LLMs face great challenges in recognizing hallucinations in text. However, our experiments also show that providing external knowledge or adding reasoning steps can help LLMs recognize hallucinations. Our benchmark can be accessed at https://github.com/RUCAIBox/HaluEval.
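The sketch below illustrates the sampling-then-filtering idea in Python, assuming the OpenAI Python SDK (openai>=1.0). The prompts, function names, and candidate count are illustrative placeholders, not the paper's exact setup; it also includes a recognition probe of the kind the abstract refers to, where prepending external knowledge to the prompt can help the model judge an answer.

```python
# Minimal sketch of a sampling-then-filtering pipeline, under stated
# assumptions: OpenAI Python SDK (openai>=1.0); prompt wording, function
# names, and n=4 candidates are illustrative, not the paper's exact setup.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def chat(prompt: str, temperature: float = 1.0) -> str:
    """Send a single-turn prompt to ChatGPT and return the reply text."""
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,
    )
    return resp.choices[0].message.content


def sample_hallucinations(question: str, right_answer: str, n: int = 4) -> list[str]:
    """Step 1 (sampling): draw several plausible but factually wrong answers."""
    prompt = (
        f"Question: {question}\n"
        f"Correct answer: {right_answer}\n"
        "Write a plausible-sounding answer that is factually WRONG."
    )
    return [chat(prompt) for _ in range(n)]


def filter_hardest(question: str, right_answer: str, candidates: list[str]) -> str:
    """Step 2 (filtering): keep the candidate hardest to tell apart from the truth."""
    listing = "\n".join(f"{i}. {c}" for i, c in enumerate(candidates))
    prompt = (
        f"Question: {question}\n"
        f"Correct answer: {right_answer}\n"
        f"Candidate wrong answers:\n{listing}\n"
        "Reply with only the index of the wrong answer most likely to be "
        "mistaken for the truth."
    )
    choice = chat(prompt, temperature=0.0).strip()
    idx = int(choice) if choice.isdigit() else 0  # fall back to the first candidate
    return candidates[idx % len(candidates)]


def recognize(question: str, answer: str, knowledge: str | None = None) -> str:
    """Recognition probe: ask whether an answer contains hallucination.
    Supplying retrieved knowledge (or requesting step-by-step reasoning)
    is the kind of aid the paper finds helpful."""
    context = f"Knowledge: {knowledge}\n" if knowledge else ""
    prompt = (
        f"{context}Question: {question}\n"
        f"Answer: {answer}\n"
        "Does the answer contain hallucinated (unsupported) information? "
        "Reply Yes or No."
    )
    return chat(prompt, temperature=0.0)
```

In this reading of the framework, the filtering step is what makes the benchmark samples difficult: candidates that are easy to distinguish from the correct answer are discarded, and only the most plausible hallucination is kept.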

Junyi Li, Xiaoxue Cheng, Wayne Xin Zhao, Jian-Yun Nie, Ji-Rong Wen • 2023

Related benchmarks

Task                        Dataset                           Metric                Result   Rank
Hallucination Detection     HELM Passage Level v1.0 (test)    AUC                   0.9196   84
Hallucination Detection     HELM Sentence Level v1.0 (test)   AUC                   0.7972   84
Hallucination Detection     FELM worldknowledge (test)        Nonfact Accuracy      42.9     15
Hallucination Detection     OpenDialKG Eval (test)            Macro F1              70.2     7
Hallucination Detection     LLaMA2-7B-Chat                    Detection Percentage  282.2    7
Hallucination Regeneration  HaluEval QA                       Accuracy              68.55    5
