
KG-GPT: A General Framework for Reasoning on Knowledge Graphs Using Large Language Models

About

While large language models (LLMs) have made considerable advances in understanding and generating unstructured text, their application to structured data remains underexplored. In particular, using LLMs for complex reasoning tasks on knowledge graphs (KGs) remains largely untouched. To address this, we propose KG-GPT, a multi-purpose framework leveraging LLMs for tasks employing KGs. KG-GPT comprises three steps: Sentence Segmentation, Graph Retrieval, and Inference, aimed at partitioning sentences, retrieving relevant graph components, and deriving logical conclusions, respectively. We evaluate KG-GPT on KG-based fact verification and KGQA benchmarks, where it shows competitive and robust performance, even outperforming several fully supervised models. Our work therefore marks a significant step toward unifying structured and unstructured data processing within the realm of LLMs.
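The three steps above can be sketched in code. The sketch below is a minimal, hypothetical illustration of the control flow (Sentence Segmentation → Graph Retrieval → Inference) applied to fact verification; in the paper each step is driven by LLM prompts, whereas here each step is stubbed with simple string matching so it runs standalone. The toy knowledge graph, function names, and matching heuristics are all assumptions, not the authors' implementation.

```python
# Toy knowledge graph as (head, relation, tail) triples (illustrative only).
KG = {
    ("Inception", "directed_by", "Christopher Nolan"),
    ("Christopher Nolan", "born_in", "London"),
}

def segment(claim: str) -> list[str]:
    # Step 1, Sentence Segmentation: split a compound claim into
    # atomic sub-claims (an LLM prompt in the paper; a naive
    # conjunction split stands in here).
    return [s.strip() for s in claim.split(" and ")]

def retrieve(sub_claim: str) -> set[tuple]:
    # Step 2, Graph Retrieval: pull triples whose head or tail
    # entity is mentioned in the sub-claim.
    return {t for t in KG if t[0] in sub_claim or t[2] in sub_claim}

def infer(sub_claim: str, evidence: set[tuple]) -> bool:
    # Step 3, Inference: check the sub-claim against the retrieved
    # evidence (an LLM judges entailment in the paper; a substring
    # check over head and tail entities stands in here).
    return any(t[0] in sub_claim and t[2] in sub_claim for t in evidence)

def verify(claim: str) -> bool:
    # A claim is supported only if every sub-claim is supported.
    return all(infer(sc, retrieve(sc)) for sc in segment(claim))
```

For example, `verify("Inception was directed by Christopher Nolan and Christopher Nolan was born in London")` succeeds because each sub-claim finds a matching triple, while a claim naming the wrong director fails at the inference step.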

Jiho Kim, Yeonsu Kwon, Yohan Jo, Edward Choi • 2023

Related benchmarks

| Task | Dataset | Metric | Score | Rank |
| --- | --- | --- | --- | --- |
| Question Answering | MetaQA 3-hop | Hits@1 | 88.2 | 47 |
| Knowledge Base Question Answering | MetaQA 1-hop | Hits@1 | 93.6 | 28 |
| Knowledge Graph Question Answering | MetaQA 2-hop (test) | Hits@1 | 93.6 | 24 |
| Claim Verification | FactKG (test) | Average Accuracy | 74.7 | 20 |
| Fact Verification | FACTKG 1.0 (test) | Accuracy | 72.7 | 9 |
| Knowledge Graph Question Answering | MetaQA 2-hop 1.0 (test) | Accuracy | 94.4 | 9 |
| Knowledge Graph Question Answering | MetaQA 1-hop 1.0 (test) | Accuracy (%) | 96.3 | 9 |
| Single-hop QA | Assembly Knowledge Graph QA Single-hop (test) | Accuracy | 75.7 | 5 |
| Multi-hop QA | Assembly Knowledge Graph QA Multi-hop (test) | nLCS | 45.7 | 5 |
| Knowledge Graph Question Answering | PathQuestions (PQ) 2-hop (test) | Hits@1 | 86.1 | 4 |
Showing 10 of 12 rows
