
FiDeLiS: Faithful Reasoning in Large Language Model for Knowledge Graph Question Answering

About

Large Language Models (LLMs) are often challenged by generating erroneous or hallucinated responses, especially in complex reasoning tasks. Leveraging Knowledge Graphs (KGs) as external knowledge sources has emerged as a viable solution. However, existing KG-enhanced methods, either retrieval-based or agent-based, encounter difficulties in accurately retrieving knowledge and efficiently traversing KGs at scale. In this paper, we propose a unified framework, FiDeLiS, designed to improve the factuality of LLM responses by anchoring answers to verifiable reasoning steps retrieved from KGs. To achieve this, we leverage step-wise beam search with a deductive scoring function, allowing the LLM to validate the reasoning process step by step and halt the search once the question is deducible. In addition, we propose a Path-RAG module to pre-select a smaller candidate set for each beam search step, reducing computational costs by narrowing the search space. Extensive experiments show that our method, as a training-free framework, not only improves performance but also enhances factuality and interpretability across different benchmarks. Code is released at https://github.com/Y-Sui/FiDeLiS.
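To make the abstract's pipeline concrete, here is a minimal, hypothetical sketch of the step-wise beam search it describes: Path-RAG pre-selects a small candidate set at each step, a deductive scoring function ranks partial paths, and the search halts once the question is judged deducible. The function names (`path_rag_candidates`, `deductive_score`, `is_deducible`) and the toy scoring logic are illustrative stand-ins, not the paper's actual implementation, which uses LLM calls and embedding retrieval for these components.

```python
# Hypothetical sketch of FiDeLiS-style step-wise beam search over a KG.
# The KG is a plain adjacency dict; all scoring functions are placeholders.

def path_rag_candidates(path, kg, top_k=3):
    """Path-RAG stand-in: pre-select a small candidate set of next nodes.
    The paper combines embedding similarity with graph connectivity; here we
    simply take the first top_k neighbors of the path's tail node."""
    tail = path[-1]
    return kg.get(tail, [])[:top_k]

def deductive_score(question, path):
    """Deductive-scoring stand-in: reward paths whose last hop shares words
    with the question, with a small penalty on path length."""
    tail = str(path[-1]).lower()
    overlap = sum(word in tail for word in question.lower().split())
    return overlap - 0.1 * len(path)

def is_deducible(question, path):
    """Halting check stand-in: in FiDeLiS an LLM judges whether the question
    is answerable from the path; here, stop once a path of depth >= 2 scores
    positively."""
    return len(path) >= 2 and deductive_score(question, path) > 0

def beam_search(question, start, kg, beam_width=2, max_depth=3):
    """Step-wise beam search: expand each beam via Path-RAG candidates,
    keep the top-scoring partial paths, halt once the question is deducible."""
    beams = [[start]]
    for _ in range(max_depth):
        expanded = []
        for path in beams:
            if is_deducible(question, path):
                return path  # halt: answer is already deducible from this path
            for nxt in path_rag_candidates(path, kg):
                expanded.append(path + [nxt])
        if not expanded:
            break
        expanded.sort(key=lambda p: deductive_score(question, p), reverse=True)
        beams = expanded[:beam_width]  # prune to beam width
    return beams[0] if beams else [start]
```

On a toy graph, a query whose keyword appears on one branch steers the beam toward that branch and terminates early, illustrating the "halt once deducible" behavior without exhausting the search depth.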

Yuan Sui, Yufei He, Nian Liu, Xiaoxin He, Kun Wang, Bryan Hooi • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Knowledge Base Question Answering | WEBQSP (test) | Hit@1 | 74.11 | 143 |
| Knowledge Graph Question Answering | WebQSP | Hit@1 | 84.4 | 122 |
| Knowledge Graph Question Answering | CWQ | Hit@1 | 71.5 | 105 |
| Knowledge Graph Question Answering | CWQ (test) | Hits@1 | 60.71 | 69 |
| Knowledge Graph Question Answering | CR-LT | Accuracy | 72.12 | 11 |
| Knowledge Graph Question Answering | CR-LT (test) | Accuracy | 63.12 | 2 |
