
Inductive Entity Representations from Text via Link Prediction

About

Knowledge Graphs (KGs) are of vital importance for multiple applications on the web, including information retrieval, recommender systems, and metadata annotation. Regardless of whether they are built manually by domain experts or with automatic pipelines, KGs are often incomplete. Recent work has begun to explore the use of textual descriptions available in knowledge graphs to learn vector representations of entities in order to perform link prediction. However, the extent to which these representations, learned for link prediction, generalize to other tasks is unclear. This is important given the cost of learning such representations. Ideally, we would prefer representations that do not need to be trained again when transferring to a different task, while retaining reasonable performance. In this work, we propose a holistic evaluation protocol for entity representations learned via a link prediction objective. We consider the inductive link prediction and entity classification tasks, which involve entities not seen during training, as well as an information retrieval task for entity-oriented search. We evaluate an architecture based on a pre-trained language model that exhibits strong generalization to entities not observed during training, and outperforms related state-of-the-art methods (22% MRR improvement in link prediction on average). We further provide evidence that the learned representations transfer well to other tasks without fine-tuning. In the entity classification task, we obtain an average improvement of 16% in accuracy compared with baselines that also employ pre-trained models. In the information retrieval task, we obtain significant improvements of up to 8.8% in NDCG@10 for natural language queries. We thus show that the learned representations are not limited to KG-specific tasks, and have greater generalization properties than evaluated in previous work.
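The core idea described above can be sketched as follows: entities are represented by encoding their textual description, and triples are scored with a relational model over those encodings. This is a minimal, self-contained illustration only: the hashed bag-of-words encoder is a toy stand-in for the pre-trained language model, and the TransE-style translation score is one possible choice of relational scoring function; all function names and the embedding size are assumptions made here, not the paper's actual implementation.

```python
import numpy as np
from zlib import crc32

DIM = 32  # toy embedding size, standing in for the LM's hidden size


def encode_description(text: str) -> np.ndarray:
    """Toy stand-in for a pre-trained language model encoder.

    Averages deterministic hashed random vectors per token; purely for
    illustration of the interface "description text -> entity embedding".
    """
    tokens = text.lower().split()
    vec = np.zeros(DIM)
    for tok in tokens:
        rng = np.random.default_rng(crc32(tok.encode("utf-8")))
        vec += rng.standard_normal(DIM)
    return vec / max(1, len(tokens))


def transe_score(head_desc: str, rel_vec: np.ndarray, tail_desc: str) -> float:
    """TransE-style score -||h + r - t||: closer to 0 means more plausible."""
    h = encode_description(head_desc)
    t = encode_description(tail_desc)
    return -float(np.linalg.norm(h + rel_vec - t))


def rank_tails(head_desc: str, rel_vec: np.ndarray, candidates: list[str]) -> list[str]:
    """Rank candidate tail descriptions by descending score, as in link prediction.

    Because entities enter only through their descriptions, unseen (inductive)
    entities can be scored without retraining.
    """
    return sorted(candidates, key=lambda d: transe_score(head_desc, rel_vec, d),
                  reverse=True)
```

Note that nothing in `rank_tails` depends on the candidate entities having been seen during training, which is what makes the description-based setup inductive.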

Daniel Daza, Michael Cochez, Paul Groth • 2020

Related benchmarks

Task | Dataset | Metric | Result | Rank
Link Prediction | FB15k-237 (test) | Hits@10 | 41.1 | 419
Link Prediction | WN18RR (test) | Hits@10 | 58 | 380
Link Prediction | Wikidata5M (test) | MRR | 0.493 | 58
Inductive Link Prediction | FB15k-237 inductive (test) | Hits@10 | 0.363 | 37
Inductive Link Prediction | WN18RR inductive (test) | MRR | 0.285 | 30
Link Prediction | WN18RR transductive (test) | MRR | 0.325 | 30
Inductive Link Prediction | Wikidata5M IND (test) | MRR | 0.478 | 13
Entity Classification | WN18RR (test) | Accuracy | 99.5 | 12
Entity Classification | FB15k-237 (test) | Accuracy | 85.8 | 12
Information Retrieval | DBpedia-Entity ListSearch v2 (test) | NDCG@10 | 0.442 | 9

(Showing 10 of 15 rows.)
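The ranking metrics in the table, MRR and Hits@k, are both computed from the 1-based rank of the correct entity in each query's candidate list. A minimal sketch (function names are my own):

```python
def mean_reciprocal_rank(ranks: list[int]) -> float:
    """Mean of 1/rank over queries; ranks are 1-based positions of the correct entity."""
    return sum(1.0 / r for r in ranks) / len(ranks)


def hits_at_k(ranks: list[int], k: int = 10) -> float:
    """Fraction of queries whose correct entity appears in the top k."""
    return sum(r <= k for r in ranks) / len(ranks)
```

For example, ranks [1, 2, 4] give an MRR of (1 + 1/2 + 1/4) / 3 ≈ 0.583, and ranks [1, 5, 20, 11] give Hits@10 = 0.5.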

Other info

Code
