
GAP: A Graph-aware Language Model Framework for Knowledge Graph-to-Text Generation

About

Recent improvements in KG-to-text generation stem from auxiliary pre-training tasks designed to boost performance on the downstream fine-tuning task. These tasks require extensive computational resources while yielding only marginal improvements. Here, we demonstrate that by fusing graph-aware elements into existing pre-trained language models, we are able to outperform state-of-the-art models and close the gap imposed by additional pre-training tasks. We do so by proposing a mask structure to capture neighborhood information and a novel type encoder that adds a bias to the graph-attention weights depending on the connection type. Experiments on two KG-to-text benchmark datasets show our models are competitive while involving fewer parameters and no additional pre-training tasks. By formulating the problem as a framework, we can interchange the various proposed components and begin interpreting KG-to-text generative models based on the topological and type information found in a graph.
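The two graph-aware elements described above can be sketched in a few lines. This is not the paper's implementation; it is a minimal illustration, assuming a single attention head and hypothetical names (`adj_mask`, `type_ids`, `type_bias`), of (1) masking attention to a node's graph neighborhood and (2) adding a learned per-connection-type bias to the attention logits:

```python
import numpy as np

def graph_attention(q, k, v, adj_mask, type_ids, type_bias):
    """Single-head graph attention with a neighborhood mask and type bias.

    q, k, v:    (n, d) entity representations
    adj_mask:   (n, n) 1 where an edge (or self-loop) exists, else 0
    type_ids:   (n, n) integer index of the connection type per pair
    type_bias:  (num_types,) learned scalar bias per connection type
    """
    d = q.shape[-1]
    logits = q @ k.T / np.sqrt(d)                 # standard scaled dot-product scores
    logits = logits + type_bias[type_ids]         # bias scores by connection type
    logits = np.where(adj_mask > 0, logits, -1e9) # restrict attention to neighbors
    weights = np.exp(logits - logits.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ v                            # aggregate neighbor values
```

With an identity adjacency (self-loops only), each node attends solely to itself, so the output equals `v`; denser masks let information flow along typed edges instead.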

Anthony Colas, Mehrdad Alvandipour, Daisy Zhe Wang • 2022

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Graph-to-Text | WebNLG v2.0 (test) | BLEU | 66.2 | 9 |
| Medical Question Answering | BioASQ (test) | ROUGE-1 | 26.5 | 8 |
| Medical Question Answering | CMedQA (test) | ROUGE-1 | 13.23 | 8 |
| Graph-to-text generation | EventNarrative (test) | BLEU | 35.08 | 5 |

Other info

Code
