JointGT: Graph-Text Joint Representation Learning for Text Generation from Knowledge Graphs
About
Existing pre-trained models for knowledge-graph-to-text (KG-to-text) generation simply fine-tune text-to-text pre-trained models such as BART or T5 on KG-to-text datasets, which largely ignores the graph structure during encoding and lacks elaborate pre-training tasks that explicitly model graph-text alignments. To tackle these problems, we propose a graph-text joint representation learning model called JointGT. During encoding, we devise a structure-aware semantic aggregation module that is plugged into each Transformer layer to preserve the graph structure. Furthermore, we propose three new pre-training tasks to explicitly enhance graph-text alignment: text reconstruction, graph reconstruction, and graph-text alignment in the embedding space via Optimal Transport. Experiments show that JointGT achieves new state-of-the-art performance on various KG-to-text datasets.
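The embedding-space alignment task relies on Optimal Transport, which is commonly approximated with the entropy-regularized Sinkhorn algorithm. The sketch below illustrates that general idea only: it computes a soft transport plan between graph-node embeddings and token embeddings from a pairwise cost matrix, and the resulting transport cost can serve as an alignment loss. Function names, shapes, and hyperparameters here are illustrative assumptions, not code from the JointGT repository.

```python
import numpy as np

def sinkhorn_plan(cost, eps=0.1, n_iters=100):
    """Entropy-regularized OT via Sinkhorn iterations (illustrative sketch).

    cost: (n_nodes, n_tokens) pairwise distances between graph-node
    embeddings and token embeddings. Returns a soft transport plan
    whose marginals approach uniform distributions over both sides.
    """
    n, m = cost.shape
    a = np.full(n, 1.0 / n)          # uniform marginal over graph nodes
    b = np.full(m, 1.0 / m)          # uniform marginal over tokens
    K = np.exp(-cost / eps)          # Gibbs kernel
    u = np.ones(n)
    v = np.ones(m)
    for _ in range(n_iters):         # alternate row / column scaling
        u = a / (K @ v)
        v = b / (K.T @ u)
    return (u[:, None] * K) * v[None, :]

# Toy example: 3 graph-node embeddings vs. 4 token embeddings.
rng = np.random.default_rng(0)
nodes = rng.normal(size=(3, 8))
tokens = rng.normal(size=(4, 8))
# Squared Euclidean cost between every node/token pair.
cost = ((nodes[:, None, :] - tokens[None, :, :]) ** 2).sum(-1)
plan = sinkhorn_plan(cost)
ot_loss = (plan * cost).sum()        # alignment loss to minimize
```

The total mass of the plan is 1, and after the final column update its column sums match the token-side marginal exactly; the loss decreases as graph and text embeddings move closer under the cost.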
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Knowledge Base Question Generation | WEBQUESTIONS (test) | METEOR | 32.05 | 12 |
| Knowledge Base Question Generation | PathQuestions (test) | METEOR | 48.25 | 12 |
| Graph-to-Text | WebNLG v2.0 (test) | BLEU | 65.92 | 9 |
| RDF-to-Text Generation | WebNLG Unconstrained 2.0 (test) | BLEU | 0.6614 | 6 |
| RDF-to-Text Generation | WebNLG Constrained 2.0 (test) | BLEU | 61.01 | 5 |
| Data-to-text generation | WebNLG KG | BLEU (Unigram) | 37.2 | 5 |
| Graph-to-text generation | EventNarrative (test) | BLEU | 31.19 | 5 |
| KG-to-text generation | WebNLG U (test) | Fluency Win Rate | 29 | 2 |