
GraphEval: A Knowledge-Graph Based LLM Hallucination Evaluation Framework

About

Methods to evaluate Large Language Model (LLM) responses and detect inconsistencies, also known as hallucinations, with respect to the provided knowledge are becoming increasingly important for LLM applications. Current metrics fall short: they fail to provide explainable decisions, do not systematically check every piece of information in the response, and are often too computationally expensive to use in practice. We present GraphEval, a hallucination evaluation framework based on representing information in Knowledge Graph (KG) structures. Our method identifies the specific triples in the KG that are prone to hallucinations and hence provides more insight than previous methods into where in the response a hallucination has occurred, if at all. Furthermore, using our approach in conjunction with state-of-the-art natural language inference (NLI) models improves balanced accuracy on various hallucination benchmarks compared to using the raw NLI models. Lastly, we explore the use of GraphEval for hallucination correction by leveraging the structure of the KG, a method we name GraphCorrect, and demonstrate that the majority of hallucinations can indeed be rectified.
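The abstract describes a two-stage pipeline: decompose the response into knowledge-graph triples, then check each triple against the provided context with an entailment judge. The sketch below illustrates that flow with toy stand-ins; `extract_triples` and `entailed` are hypothetical simplifications (the paper uses an LLM to build the KG and an NLI model for the entailment check), not the authors' implementation.

```python
# Sketch of the GraphEval idea: response -> triples -> per-triple check.
# All helpers are toy stand-ins for illustration only.
from typing import List, Tuple

Triple = Tuple[str, str, str]


def extract_triples(response: str) -> List[Triple]:
    """Toy extractor: parse lines of the form 'subject | relation | object'.
    In the paper, an LLM extracts triples from free-form text."""
    triples = []
    for line in response.strip().splitlines():
        parts = [p.strip() for p in line.split("|")]
        if len(parts) == 3:
            triples.append((parts[0], parts[1], parts[2]))
    return triples


def entailed(context: str, triple: Triple) -> bool:
    """Toy entailment check: a triple counts as supported if all of its
    terms appear in the context. A real system would score the premise
    (context) against the hypothesis 'subject relation object' with an
    NLI model instead of substring matching."""
    return all(term.lower() in context.lower() for term in triple)


def grapheval(context: str, response: str):
    """Return (is_hallucinated, offending_triples): the response is
    flagged if any extracted triple is not entailed by the context,
    and the unsupported triples localize where the hallucination is."""
    bad = [t for t in extract_triples(response) if not entailed(context, t)]
    return (len(bad) > 0, bad)
```

Because the verdict comes with the specific unsupported triples, a correction step in the spirit of GraphCorrect only needs to rewrite those triples rather than regenerate the whole response.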

Hannah Sansford, Nicholas Richardson, Hermina Petric Maretic, Juba Nait Saada • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Fact Checking | PubHealth | Balanced Accuracy | 63.7 | 26 |
| Fact Checking | COVID-Fact | Balanced Accuracy | 60.7 | 22 |
| Fact Checking | AggreFact CNN | Balanced Accuracy | 69.5 | 15 |
| Fact Checking | SummEval | Balanced Accuracy | 69.7 | 15 |
| Fact Checking | AggreFact Xsum | Balanced Accuracy | 67.6 | 15 |
| Fact Checking | Average across General and Medical Domains | Overall Average | 65.1 | 15 |
| Fact Checking | ExpertQA | Balanced Accuracy | 56 | 15 |
| Fact Checking | SciFact | Balanced Accuracy | 68.4 | 15 |
| Fact Checking | Reveal | Balanced Accuracy | 89.8 | 7 |
| Fact Checking | MiniCheck (test) | Balanced Accuracy | 72.1 | 6 |

(Showing 10 of 12 rows)
