
Complex Logical Query Answering by Calibrating Knowledge Graph Completion Models

About

Complex logical query answering (CLQA) is a challenging task that involves finding answer entities for complex logical queries over incomplete knowledge graphs (KGs). Previous research has explored using pre-trained knowledge graph completion (KGC) models, which can predict the missing facts in KGs, to answer complex logical queries. However, KGC models are typically evaluated with ranking metrics, so their prediction values may not be well-calibrated. In this paper, we propose a method for calibrating KGC models, namely CKGC, which enables KGC models to adapt to answering complex logical queries. Notably, CKGC is lightweight and effective: the adaptation function is simple, allowing the model to converge quickly during adaptation. The core idea of CKGC is to map the prediction values of KGC models to the range [0, 1], ensuring that values associated with true facts are close to 1, while values associated with false facts are close to 0. Through experiments on three benchmark datasets, we demonstrate that the proposed calibration method significantly boosts model performance on the CLQA task. Moreover, our approach improves CLQA performance while preserving the ranking metrics of the underlying KGC models. The code is available at https://github.com/changyi7231/CKGC.
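The calibration idea described above can be sketched in a few lines. The following is an illustrative toy example, not the authors' CKGC implementation: raw KGC scores are mapped into [0, 1] with a learned affine transform followed by a sigmoid, fitted by minimizing binary cross-entropy so that true facts get probabilities near 1 and false facts near 0. The score values and the scalar-parameter form of the map are assumptions for illustration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def calibrate(scores, labels, lr=0.1, steps=2000):
    """Fit a scalar affine map p = sigmoid(w * s + b) to binary labels
    by gradient descent on the binary cross-entropy loss."""
    w, b = 1.0, 0.0
    for _ in range(steps):
        p = sigmoid(w * scores + b)
        grad = p - labels            # dBCE/dlogit for the sigmoid + BCE pair
        w -= lr * np.mean(grad * scores)
        b -= lr * np.mean(grad)
    return w, b

# Toy data: raw ranking scores from a KGC model; label 1 = true fact, 0 = false.
scores = np.array([2.5, 1.8, -0.3, -1.2, 3.1, -2.0])
labels = np.array([1.0, 1.0, 0.0, 0.0, 1.0, 0.0])

w, b = calibrate(scores, labels)
probs = sigmoid(w * scores + b)
print(np.round(probs, 2))  # calibrated values lie in [0, 1]
```

Because the affine map is monotone increasing in the score, the ranking induced by the calibrated probabilities matches the original ranking, which is consistent with the paper's claim that calibration preserves the KGC ranking metrics.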

Changyi Xiao, Yixin Cao · 2024

Related benchmarks

Task                    | Dataset           | Metric            | Result | Rank
Complex Query Answering | NELL-995 (test)   | Hits@1 (1p)       | 61.4   | 31
Complex Query Answering | FB15K (test)      | Hits@1 (1p)       | 89.2   | 30
Complex Query Answering | FB15k-237 (test)  | Hits@1 (avg path) | 0.348  | 27
