
Finetuning Generative Large Language Models with Discrimination Instructions for Knowledge Graph Completion

About

Traditional knowledge graph (KG) completion models learn embeddings to predict missing facts. Recent works attempt to complete KGs in a text-generation manner with large language models (LLMs). However, they need to ground the output of LLMs to KG entities, which inevitably brings errors. In this paper, we present a finetuning framework, DIFT, aiming to unleash the KG completion ability of LLMs and avoid grounding errors. Given an incomplete fact, DIFT employs a lightweight model to obtain candidate entities and finetunes an LLM with discrimination instructions to select the correct one from the given candidates. To improve performance while reducing instruction data, DIFT uses a truncated sampling method to select useful facts for finetuning and injects KG embeddings into the LLM. Extensive experiments on benchmark datasets demonstrate the effectiveness of our proposed framework.
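To make the pipeline concrete, here is a minimal sketch (not the authors' code) of how a DIFT-style discrimination instruction might be assembled. The candidate list is assumed to come from a lightweight KG-embedding model; the names `rank_candidates` and `build_discrimination_instruction` are hypothetical placeholders, not APIs from the paper.

```python
from typing import List


def rank_candidates(head: str, relation: str, k: int = 20) -> List[str]:
    """Hypothetical stand-in for a lightweight KGC model (e.g., an
    embedding model) that scores all entities and returns the top-k
    tail candidates for the incomplete fact (head, relation, ?)."""
    raise NotImplementedError("plug in your embedding model here")


def build_discrimination_instruction(head: str, relation: str,
                                     candidates: List[str]) -> str:
    """Turn an incomplete fact plus its candidate entities into an
    instruction for the finetuned LLM. Because the LLM only chooses
    from the given list, its output never needs to be grounded back
    to KG entities, which is how grounding errors are avoided."""
    options = "\n".join(f"[{i}] {e}" for i, e in enumerate(candidates))
    return (
        f"Given the incomplete fact ({head}, {relation}, ?), "
        f"select the entity that completes it from the candidates below. "
        f"Answer with the entity name only.\n{options}"
    )


# Example usage with a toy candidate list:
prompt = build_discrimination_instruction(
    "Barack Obama", "place_of_birth",
    ["Honolulu", "Chicago", "New York City"],
)
print(prompt)
```

In this framing, the LLM is finetuned on such prompts paired with the correct entity as the target, so completion becomes a discrimination task over a small candidate set rather than open-ended generation.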

Yang Liu, Xiaobin Tian, Zequn Sun, Wei Hu • 2024

Related benchmarks

Task | Dataset | Result | Rank
Knowledge Base Completion | CWQ (30% KB) | MRR 50.4 | 16
Knowledge Base Completion | WebQSP (30% KB) | MRR 51.5 | 16
Knowledge Base Completion | WebQSP (50% KB) | MRR 53.6 | 16
Knowledge Base Completion | CWQ (50% KB) | MRR 52.1 | 16
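For reference, MRR is mean reciprocal rank: the average, over all test queries, of the reciprocal of the rank at which the correct entity appears in the model's candidate list. The table above appears to report MRR scaled by 100. The standard definition is

\mathrm{MRR} = \frac{1}{|Q|} \sum_{i=1}^{|Q|} \frac{1}{\mathrm{rank}_i}

where |Q| is the number of test queries and rank_i is the position of the correct entity for query i.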
