
KICGPT: Large Language Model with Knowledge in Context for Knowledge Graph Completion

About

Knowledge Graph Completion (KGC) is crucial for addressing knowledge graph incompleteness and supporting downstream applications. Existing KGC models fall into two main classes: triple-based and text-based approaches. Triple-based methods struggle with long-tail entities because of limited structural information and imbalanced entity distributions. Text-based methods alleviate this issue but require costly language-model training and knowledge-graph-specific finetuning, which limits their efficiency. To address these limitations, this paper proposes KICGPT, a framework that integrates a large language model (LLM) with a triple-based KGC retriever. It alleviates the long-tail problem without incurring additional training overhead. KICGPT uses an in-context learning strategy called Knowledge Prompt, which encodes structural knowledge into demonstrations that guide the LLM. Empirical results on benchmark datasets demonstrate the effectiveness of KICGPT, with lower training overhead and no finetuning.
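The abstract describes a two-stage pipeline: a triple-based retriever produces an initial candidate ranking, and the LLM re-ranks candidates guided by demonstration triples packed into the prompt. Below is a minimal sketch of that flow, assuming a simplified interface; all function and variable names are illustrative, not the authors' actual API, and the retriever and LLM calls are stand-ins.

```python
# Illustrative sketch of the KICGPT pipeline as described in the abstract:
# retriever ranks candidates, a "Knowledge Prompt" encodes demonstration
# triples in context, and an LLM re-ranks. Names here are assumptions.

from dataclasses import dataclass


@dataclass
class Triple:
    head: str
    relation: str
    tail: str


def retriever_rank(head: str, relation: str, candidates: list[str]) -> list[str]:
    """Stand-in for a triple-based KGC retriever (e.g., a KG-embedding
    scorer). A real retriever would sort candidates by a learned
    plausibility score; here we keep the input order."""
    return list(candidates)


def build_knowledge_prompt(query: Triple, demos: list[Triple],
                           candidates: list[str]) -> str:
    """Encode structural knowledge (demonstration triples) into an
    in-context prompt, in the spirit of the Knowledge Prompt strategy."""
    lines = ["Complete the missing tail entity.", "Examples:"]
    lines += [f"({d.head}, {d.relation}, {d.tail})" for d in demos]
    lines.append(f"Query: ({query.head}, {query.relation}, ?)")
    lines.append("Candidates: " + ", ".join(candidates))
    lines.append("Return the candidates ordered by plausibility.")
    return "\n".join(lines)


def llm_rerank(prompt: str, candidates: list[str]) -> list[str]:
    """Placeholder for an LLM call; a real system would parse the model's
    ordered answer. Here we return the candidates unchanged."""
    return list(candidates)


if __name__ == "__main__":
    query = Triple("Paris", "capital_of", "?")
    demos = [Triple("Berlin", "capital_of", "Germany"),
             Triple("Tokyo", "capital_of", "Japan")]
    ranked = retriever_rank("Paris", "capital_of", ["France", "Spain", "Italy"])
    prompt = build_knowledge_prompt(query, demos, ranked)
    print(prompt)
    print("Re-ranked:", llm_rerank(prompt, ranked))
```

Note that because the LLM only re-ranks retriever candidates in context, no model parameters are updated, which is how the framework avoids the finetuning cost the abstract highlights.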

Yanbin Wei, Qiushi Huang, James T. Kwok, Yu Zhang • 2024

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Link Prediction | FB15k-237 | MRR 41.2 | 293 |
| Link Prediction | WN18RR | Hits@10 64.1 | 188 |
| Knowledge Graph Completion | WN18RR | Hits@1 47.4 | 165 |
| Knowledge Graph Completion | FB15k-237 | Hits@10 0.554 | 108 |
| Knowledge Graph Reasoning | FB15k-237 (test) | -- | 29 |
| Knowledge Graph Reasoning | WN18RR transductive (test) | Hits@10 64.1 | 9 |
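For reference, the metrics in the table are the standard link-prediction measures: MRR is the mean reciprocal rank of the ground-truth entity among all candidates, and Hits@k is the fraction of queries whose true entity ranks within the top k. A short sketch of how they are computed from a list of ranks (the example ranks are made up for illustration):

```python
# Standard KGC evaluation metrics, computed from the rank of each
# ground-truth entity among all candidate entities.

def mrr(ranks: list[int]) -> float:
    """Mean reciprocal rank over all queries."""
    return sum(1.0 / r for r in ranks) / len(ranks)


def hits_at_k(ranks: list[int], k: int) -> float:
    """Fraction of queries whose true entity ranks in the top k."""
    return sum(1 for r in ranks if r <= k) / len(ranks)


ranks = [1, 3, 12, 2, 1]  # hypothetical ranks of true entities
print(f"MRR     = {mrr(ranks):.3f}")            # 0.583
print(f"Hits@1  = {hits_at_k(ranks, 1):.3f}")   # 0.400
print(f"Hits@10 = {hits_at_k(ranks, 10):.3f}")  # 0.800
```

Both metrics may be reported as fractions in [0, 1] or as percentages, which likely explains the mixed scales in the table above (e.g., 64.1 vs. 0.554 for Hits@10).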
