
K-ON: Stacking Knowledge On the Head Layer of Large Language Model

About

Recent advances in large language models (LLMs) have significantly improved performance on a wide range of natural language processing (NLP) tasks. LLMs are typically trained to predict the next token, an objective that aligns well with many NLP tasks. In knowledge graph (KG) scenarios, however, entities are the fundamental units, and identifying a single entity usually requires several tokens. This creates a granularity mismatch between KGs and natural language. To address this issue, we propose K-ON, which integrates KG knowledge into the LLM by employing multiple head layers for next k-step prediction. K-ON not only generates entity-level results in a single step but also enables a contrastive loss over entities, one of the most powerful tools in KG representation learning. Experimental results show that K-ON outperforms state-of-the-art methods that incorporate text and even other modalities.
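The core mechanism can be illustrated with a short sketch: K extra head layers are attached to the LLM's final hidden state, head i predicting the token at step i of an entity's k-token name, so one forward pass yields a score for every candidate entity and supports an entity-level contrastive objective. This is a minimal PyTorch sketch under stated assumptions, not the authors' implementation; the names `KOnHeads`, `entity_scores`, `contrastive_loss`, and all tensor shapes are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class KOnHeads(nn.Module):
    """K head layers stacked on the LLM's last hidden state.

    Head i predicts the token at step i of an entity's k-token name,
    so one forward pass scores a whole entity instead of one token.
    (Illustrative sketch; names and shapes are assumptions, not the
    paper's code.)
    """
    def __init__(self, hidden_size: int, vocab_size: int, k: int):
        super().__init__()
        self.k = k
        # One language-model head per future step.
        self.heads = nn.ModuleList(
            nn.Linear(hidden_size, vocab_size) for _ in range(k)
        )

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (batch, hidden_size), the LLM hidden state at the position
        # where the answer entity should start.
        # Returns per-step token logits: (batch, k, vocab_size).
        return torch.stack([head(h) for head in self.heads], dim=1)


def entity_scores(step_logits: torch.Tensor,
                  entity_token_ids: torch.Tensor) -> torch.Tensor:
    # step_logits: (batch, k, vocab)
    # entity_token_ids: (num_entities, k), each entity's name padded or
    # truncated to exactly k tokens (an assumption of this sketch).
    log_probs = F.log_softmax(step_logits, dim=-1)  # (batch, k, vocab)
    # Gather each entity's token log-prob at every step: (batch, k, num_entities).
    index = entity_token_ids.T.unsqueeze(0).expand(log_probs.size(0), -1, -1)
    per_step = log_probs.gather(2, index)
    # Sum over the k steps to get one score per entity: (batch, num_entities).
    return per_step.sum(dim=1)


def contrastive_loss(scores: torch.Tensor,
                     target_entity: torch.Tensor) -> torch.Tensor:
    # Entity-level contrastive objective: the gold entity competes
    # against all other entities as negatives.
    return F.cross_entropy(scores, target_entity)
```

Because the k heads fire in parallel, ranking all candidate entities costs a single forward pass rather than k autoregressive decoding steps, and the cross-entropy over entity scores is precisely a contrastive loss with every other entity acting as a negative.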

Lingbing Guo, Yichi Zhang, Zhongpu Bo, Zhuo Chen, Mengshu Sun, Zhiqiang Zhang, Wen Zhang, Huajun Chen • 2025

Related benchmarks

Task | Dataset | Metric | Result | Rank
Knowledge Graph Completion | DB15K | MRR | 38.10 | 22
Knowledge Graph Completion | MKG-W | MRR | 36.64 | 22
Knowledge Graph Completion | MKG-Y | MRR | 35.83 | 22
Knowledge Graph Completion | Overall (DB15K, MKG-W, MKG-Y) | MRR | 36.86 | 22
