Dipping PLMs Sauce: Bridging Structure and Text for Effective Knowledge Graph Completion via Conditional Soft Prompting
About
Knowledge Graph Completion (KGC) often requires both structural and textual information from the KG to be effective. Pre-trained Language Models (PLMs) have been used to capture the textual information, usually under the fine-tuning paradigm for the KGC task. However, fine-tuned PLMs tend to focus overwhelmingly on the textual information and overlook structural knowledge. To tackle this issue, this paper proposes CSProm-KG (Conditional Soft Prompts for KGC), which maintains a balance between structural information and textual knowledge. CSProm-KG only tunes the parameters of Conditional Soft Prompts, which are generated from the entity and relation representations. We verify the effectiveness of CSProm-KG on three popular static KGC benchmarks, WN18RR, FB15K-237, and Wikidata5M, and two temporal KGC benchmarks, ICEWS14 and ICEWS05-15. CSProm-KG outperforms competitive baseline models and sets a new state-of-the-art on these benchmarks. We conduct further analysis to show (i) the effectiveness of our proposed components, (ii) the efficiency of CSProm-KG, and (iii) the flexibility of CSProm-KG.
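The sketch below illustrates the conditional soft-prompting idea described above: structural entity and relation embeddings are projected into a few prompt vectors that are prepended to the text embeddings of a frozen PLM, so only the structural embeddings and the projection are trained. This is a minimal illustration, not the authors' implementation; the class and variable names (`CSPromKGSketch`, `EMB_DIM`, `PROMPT_LEN`, `to_prompt`), the dimensions, and the single point of prompt injection are all assumptions.

```python
# Minimal sketch (not the authors' code) of conditional soft prompting for KGC.
# Assumptions: a frozen BERT stands in for the PLM; dimensions and names are illustrative.
import torch
import torch.nn as nn
from transformers import BertConfig, BertModel

EMB_DIM = 200        # structural embedding size (assumed)
PROMPT_LEN = 5       # soft-prompt tokens per entity/relation (assumed)

class CSPromKGSketch(nn.Module):
    def __init__(self, num_entities, num_relations):
        super().__init__()
        self.plm = BertModel(BertConfig())      # stands in for a pre-trained LM
        for p in self.plm.parameters():         # PLM stays frozen; only prompts train
            p.requires_grad = False
        hidden = self.plm.config.hidden_size
        # Structural embeddings that condition the soft prompts
        self.ent_emb = nn.Embedding(num_entities, EMB_DIM)
        self.rel_emb = nn.Embedding(num_relations, EMB_DIM)
        # Map each structural embedding to PROMPT_LEN prompt vectors
        self.to_prompt = nn.Linear(EMB_DIM, PROMPT_LEN * hidden)

    def forward(self, head_ids, rel_ids, input_ids, attention_mask):
        batch, hidden = input_ids.size(0), self.plm.config.hidden_size
        # Conditional soft prompts generated from (head, relation) structure
        h_prompt = self.to_prompt(self.ent_emb(head_ids)).view(batch, PROMPT_LEN, hidden)
        r_prompt = self.to_prompt(self.rel_emb(rel_ids)).view(batch, PROMPT_LEN, hidden)
        # Textual token embeddings from the frozen PLM's embedding layer
        tok_emb = self.plm.embeddings.word_embeddings(input_ids)
        inputs_embeds = torch.cat([h_prompt, r_prompt, tok_emb], dim=1)
        prompt_mask = torch.ones(batch, 2 * PROMPT_LEN, dtype=attention_mask.dtype)
        mask = torch.cat([prompt_mask, attention_mask], dim=1)
        out = self.plm(inputs_embeds=inputs_embeds, attention_mask=mask)
        # Prompt positions now carry fused text+structure features; a structural
        # decoder (e.g., a ConvE-style scorer) would consume them downstream.
        return out.last_hidden_state[:, : 2 * PROMPT_LEN, :]

# Tiny usage example with random inputs
model = CSPromKGSketch(num_entities=100, num_relations=10)
ids = torch.randint(0, 100, (2, 12))
mask = torch.ones(2, 12, dtype=torch.long)
fused = model(torch.tensor([1, 2]), torch.tensor([0, 3]), ids, mask)
print(fused.shape)  # torch.Size([2, 10, 768])
```

Because the PLM weights never receive gradients, only the small structural embeddings and the prompt projection are updated, which is what keeps the structural and textual signals balanced.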
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Link Prediction | WN18RR (test) | Hits@10 | 67.8 | 380 |
| Link Prediction | FB15k-237 | MRR | 35.8 | 280 |
| Knowledge Graph Completion | WN18RR | Hits@1 | 52.2 | 165 |
| Knowledge Graph Completion | FB15k-237 | Hits@10 | 0.538 | 108 |
| Link Prediction | YAGO3-10 | MRR | 0.488 | 33 |
| Link Prediction | PrimeKG | MR | 157 | 8 |