
DualDE: Dually Distilling Knowledge Graph Embedding for Faster and Cheaper Reasoning

About

Knowledge Graph Embedding (KGE) is a popular method for KG reasoning, and KGEs with higher dimensions are usually preferred since they offer better reasoning capability. However, high-dimensional KGEs pose huge challenges to storage and computing resources and are not suitable for resource-limited or time-constrained applications, for which faster and cheaper reasoning is necessary. To address this problem, we propose DualDE, a knowledge distillation method that builds a low-dimensional student KGE from a pre-trained high-dimensional teacher KGE. DualDE considers the dual influence between the teacher and the student. In DualDE, we propose a soft label evaluation mechanism that adaptively assigns different soft label and hard label weights to different triples, and a two-stage distillation approach that improves the student's acceptance of the teacher. DualDE is general enough to be applied to various KGEs. Experimental results show that our method can successfully reduce the embedding parameters of a high-dimensional KGE by 7 to 15 times and increase the inference speed by 2 to 6 times while retaining high performance. We also experimentally verify the effectiveness of the soft label evaluation mechanism and the two-stage distillation approach via ablation studies.
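To make the hard/soft label interplay concrete, below is a minimal PyTorch-style sketch of a distillation loss that blends a hard-label term (ground-truth triple labels) with a soft-label term (the teacher's scores) using a per-triple weight. The weighting rule shown, which uses teacher score magnitude as a confidence proxy, is a hypothetical stand-in for DualDE's soft label evaluation mechanism, not the paper's actual formulation.

```python
import torch
import torch.nn.functional as F

def dual_distill_loss(teacher_scores: torch.Tensor,
                      student_scores: torch.Tensor,
                      labels: torch.Tensor) -> torch.Tensor:
    """Blend hard-label and soft-label losses with per-triple weights.

    teacher_scores: triple scores from the frozen high-dimensional teacher.
    student_scores: triple scores from the trainable low-dimensional student.
    labels: 1.0 for positive triples, 0.0 for negative samples.
    """
    # Hard-label term: fit the ground-truth triple labels.
    hard = F.binary_cross_entropy_with_logits(
        student_scores, labels, reduction="none")

    # Soft-label term: fit the teacher's scores.
    soft = F.mse_loss(student_scores, teacher_scores, reduction="none")

    # Hypothetical per-triple weight (NOT the paper's formula): treat
    # large-magnitude teacher scores as high confidence and lean on the
    # soft label there; fall back to the hard label elsewhere.
    w = torch.sigmoid(teacher_scores.abs()).detach()

    return ((1.0 - w) * hard + w * soft).mean()
```

This sketch covers a single student-side update with the teacher held fixed; the paper's two-stage distillation approach, which further improves the student's acceptance of the teacher, is beyond its scope.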

Yushan Zhu, Wen Zhang, Mingyang Chen, Hui Chen, Xu Cheng, Wei Zhang, Huajun Chen ((1) Zhejiang University, (2) Alibaba Group, (3) CETC Big Data Research Institute) • 2020

Related benchmarks

Task                   | Dataset          | Result        | Rank
Link Prediction        | FB15k-237 (test) | Hits@10: 51.2 | 419
Link Prediction        | WN18RR (test)    | Hits@10: 56   | 380
Link Prediction        | FB15k-237        | --            | 280
Product Recommendation | SKG              | NDCG@5: 0.423 | 8
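For reference, Hits@10 in the table above is the standard link-prediction metric: the fraction of test triples whose correct entity the model ranks within its top 10 candidates. A minimal sketch, assuming a `ranks` input with one rank per test triple produced by the model's scoring:

```python
import numpy as np

def hits_at_k(ranks, k=10):
    """Fraction of test triples whose correct entity is ranked in the
    model's top-k candidates (higher is better)."""
    ranks = np.asarray(ranks)
    return float((ranks <= k).mean())

# Example: three of five test triples ranked within the top 10.
print(hits_at_k([1, 4, 12, 3, 57], k=10))  # 0.6
```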
