PALT: Parameter-Lite Transfer of Language Models for Knowledge Graph Completion
About
This paper presents a parameter-lite transfer learning approach for adapting pretrained language models (LMs) to knowledge graph (KG) completion. Instead of finetuning, which modifies all LM parameters, we tune only a few new parameters while keeping the original LM parameters fixed. We do this by reformulating KG completion as a "fill-in-the-blank" task and introducing a parameter-lite encoder on top of the original LM. We show that, by tuning far fewer parameters than finetuning, LMs transfer non-trivially to most tasks and are competitive with prior state-of-the-art approaches. For instance, we outperform fully finetuned approaches on a KG completion benchmark by tuning only 1% of the parameters. The code and datasets are available at https://github.com/yuanyehome/PALT.
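To make the recipe concrete, here is a minimal sketch of the general idea: a frozen BERT-style masked LM with a small trainable module on top, scoring a triple as a fill-in-the-blank query. It assumes Hugging Face `transformers`; the `lite_encoder` bottleneck and the `score_tail` helper are illustrative stand-ins, not the actual PALT modules.

```python
# Sketch only (not the official PALT code): freeze a pretrained masked LM and
# tune a small set of new parameters, scoring a triple via a cloze query.
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
lm = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

# Keep every original LM parameter fixed.
for p in lm.parameters():
    p.requires_grad = False

# Parameter-lite encoder: a small bottleneck applied to the LM's hidden states.
# (Illustrative; the paper's actual added modules may differ.)
hidden = lm.config.hidden_size
lite_encoder = nn.Sequential(
    nn.Linear(hidden, hidden // 16),
    nn.GELU(),
    nn.Linear(hidden // 16, hidden),
)

def score_tail(head: str, relation: str, tail: str) -> torch.Tensor:
    """Score (head, relation, tail) by asking the LM to fill in the masked tail."""
    text = f"{head} {relation} {tokenizer.mask_token}."
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        hidden_states = lm.bert(**inputs).last_hidden_state  # frozen backbone
    # Only the lite encoder receives gradients during training.
    adapted = hidden_states + lite_encoder(hidden_states)
    logits = lm.cls(adapted)  # reuse the frozen MLM prediction head
    mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
    # For simplicity, score only the first subword of the tail entity.
    tail_id = tokenizer.convert_tokens_to_ids(tokenizer.tokenize(tail))[0]
    return logits[0, mask_pos, tail_id]

trainable = sum(p.numel() for p in lite_encoder.parameters())
total = sum(p.numel() for p in lm.parameters())
print(f"tuning {trainable / total:.2%} of the backbone's parameter count")
```

The point of the sketch is the parameter budget: gradients reach only the small added module, so the fraction of tuned parameters stays at roughly the 1% level reported in the paper.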
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Link Prediction | FB15k-237 | -- | -- | 280 |
| Link Prediction | WN18RR | Hits@10 | 69.3 | 175 |
| Link Prediction | UMLS | Hits@10 | 99 | 56 |
| Triple classification | WN11 (test) | Accuracy | 93.8 | 55 |
| Triple classification | FB13 (test) | Accuracy | 91.7 | 55 |