
CodeKGC: Code Language Model for Generative Knowledge Graph Construction

About

Current generative knowledge graph construction approaches usually fail to capture structural knowledge, simply flattening natural language into serialized text or a specification language. However, large generative language models trained on structured data such as code have demonstrated impressive capabilities in structural prediction and reasoning over natural language. Intuitively, we address generative knowledge graph construction with a code language model: given a code-format natural language input, the target triples are generated via code completion. Specifically, we develop schema-aware prompts that effectively utilize the semantic structure within the knowledge graph. Because code inherently possesses structure, such as class and function definitions, it serves as a useful medium for encoding prior semantic structural knowledge. Furthermore, we employ a rationale-enhanced generation method to boost performance: rationales provide intermediate reasoning steps, thereby improving knowledge extraction ability. Experimental results indicate that the proposed approach outperforms baselines on benchmark datasets. Code and datasets are available at https://github.com/zjunlp/DeepKE/tree/main/example/llm.
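To make the idea concrete, here is a minimal, hypothetical sketch of what a schema-aware, code-format prompt could look like. The `Entity`/`Rel`/`Triple` class names, the `build_prompt` helper, and the demonstration sentences are illustrative assumptions, not the paper's exact prompt format: the knowledge graph schema is rendered as Python class definitions, few-shot examples are appended as completed `extract()` functions, and the final `extract()` stub is left open for the code language model to complete with `Triple(...)` constructor calls.

```python
# Hypothetical sketch of a schema-aware, code-format prompt in the spirit of
# CodeKGC. Class names and helper functions are illustrative assumptions.

# The KG schema, expressed as code: classes encode prior structural knowledge.
SCHEMA = '''class Entity:
    def __init__(self, name: str):
        self.name = name

class Rel:
    def __init__(self, name: str):
        self.name = name

class Triple:
    def __init__(self, head: Entity, rel: Rel, tail: Entity):
        self.head, self.rel, self.tail = head, rel, tail
'''


def build_prompt(text, demos=()):
    """Assemble schema + few-shot demonstrations + the new input.

    Each demo pairs a sentence with its gold triples, rendered as a completed
    extract() function; the final extract() is left unfinished so the code LM
    produces the triples as a code completion.
    """
    parts = [SCHEMA]
    for demo_text, demo_triples in demos:
        parts.append(f'""" {demo_text} """')
        parts.append("def extract():")
        for head, rel, tail in demo_triples:
            parts.append(
                f'    Triple(Entity("{head}"), Rel("{rel}"), Entity("{tail}"))'
            )
    parts.append(f'""" {text} """')
    parts.append("def extract():")  # left open for the model to complete
    return "\n".join(parts)


prompt = build_prompt(
    "Steve Jobs co-founded Apple.",
    demos=[
        ("Bill Gates founded Microsoft.",
         [("Bill Gates", "founded", "Microsoft")]),
    ],
)
```

The completion returned by the model can then be parsed back into triples by matching the `Triple(...)` constructor calls, which is simpler and less ambiguous than parsing free-form serialized text.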

Zhen Bi, Jing Chen, Yinuo Jiang, Feiyu Xiong, Wei Guo, Huajun Chen, Ningyu Zhang• 2023

Related benchmarks

Task                                 Dataset           Result                       Rank
Multimodal Named Entity Recognition  TWITTER 2017      F1 Score 85.25               22
Multimodal Named Entity Recognition  TWITTER 2015      F1 Score 73.62               21
Multimodal Information Extraction    M3D Chinese (ZH)  Entity Recognition F1 80.31  7
Multimodal Information Extraction    M3D English       Entity Recognition F1 77.04  7
Multimodal Relation Extraction       MNRE              F1 Score 69.68               7
