
Knowledge Reasoning Language Model: Unifying Knowledge and Language for Inductive Knowledge Graph Reasoning

About

Inductive Knowledge Graph Reasoning (KGR) aims to discover facts in open-domain KGs that contain unknown entities and relations, which challenges KGR models to comprehend uncertain KG components. Existing studies have proposed Knowledge Graph Foundation Models (KGFMs) that learn structural invariances across KGs to handle this uncertainty. Recently, Large Language Models (LLMs) have demonstrated strong capabilities for open-domain knowledge reasoning, so the latest research has focused on LLM-based KGFMs that integrate LLM knowledge with KG context for inductive KGR. However, the intrinsic knowledge of LLMs may be overshadowed by sparse KG context, leading to LLM knowledge distortion that can cause irreversible damage to model reasoning. Moreover, existing LLM-based KGR methods still struggle to fully constrain generative hallucinations in LLMs, severely limiting the credibility of reasoning results. To address these limitations, we propose a Knowledge Reasoning Language Model (KRLM) that achieves unified coordination between LLM knowledge and KG context throughout the KGR process. Specifically, we design a Knowledge Reasoning Language (KRL) instruction format and a KRL tokenizer to align LLM knowledge with KG representations. Then, we propose a KRL attention layer that coordinates intrinsic LLM knowledge with additional KG context through a dynamic knowledge memory mechanism. Finally, a structure-aware next-entity predictor is proposed, which strictly constrains the reasoning results within a trustworthy knowledge domain. Extensive experimental results on 25 real-world inductive KGR datasets demonstrate the significant superiority of the proposed KRLM in both zero-shot reasoning and fine-tuning scenarios. Our source code is available at https://anonymous.4open.science/r/KRLM-EA36.
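The abstract's "structure-aware next-entity predictor" constrains generated answers to a trustworthy knowledge domain. The paper's actual mechanism is not detailed here, but the general idea behind such constrained prediction can be sketched as masking the score of any entity not present in the KG's entity vocabulary, so a downstream softmax or argmax can never select a hallucinated entity. The function below is an illustrative sketch under that assumption, not the authors' implementation:

```python
import math

def constrained_entity_scores(logits, candidate_entity_ids):
    """Restrict next-entity scores to a trusted entity vocabulary.

    Illustrative sketch only: `logits` is a list of raw scores over the
    full output vocabulary, and `candidate_entity_ids` are the indices
    of entities actually present in the KG. Entities outside the KG are
    masked to -inf, so they can never be ranked or sampled.
    """
    allowed = set(candidate_entity_ids)
    return [s if i in allowed else -math.inf
            for i, s in enumerate(logits)]
```

With this mask in place, even if the underlying language model assigns its highest raw score to an out-of-graph token, the prediction falls back to the best-scoring in-graph entity.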

Xingrui Zhuo, Jiapu Wang, Gongqing Wu, Zhongyuan Wang, Jichen Zhang, Shirui Pan, Xindong Wu• 2025

Related benchmarks

Task                        Dataset                   Result (Hit@10, %)   Rank
Knowledge Graph Reasoning   FB15k-237 (test)          --                   29
Inductive Link Prediction   FB v1                     70.8                 11
Inductive Link Prediction   FB v2                     75.2                 11
Inductive Link Prediction   FB v4                     69.9                 11
Inductive Link Prediction   NELL V1                   91.6                 11
Inductive Link Prediction   NELL V2                   79.1                 11
Inductive Link Prediction   NELL V3                   76.8                 11
Knowledge Graph Reasoning   IndE 12 datasets (test)   75.1                 11
Inductive Link Prediction   NELL V4                   77.2                 11
Inductive Link Prediction   WN v2                     79.9                 11

Showing 10 of 34 rows.
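The benchmark results above report Hit@10, the standard link-prediction metric: the fraction of test queries for which the true entity appears among the model's top-10 ranked candidates. A minimal reference computation (the function name and input format are illustrative, not from the paper):

```python
def hits_at_k(ranks, k=10):
    """Fraction of queries whose true entity ranks within the top k.

    `ranks` holds the 1-based rank of the ground-truth entity for each
    test triple, as produced by scoring all candidate entities and
    sorting. Hit@10 is this value with k=10, often reported as a
    percentage.
    """
    return sum(1 for r in ranks if r <= k) / len(ranks)
```

For example, if the true entity ranks 1, 3, 12, 50, and 7 across five test triples, three of the five fall within the top 10, giving Hit@10 = 0.6 (60%).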
