
Graph Pre-training for AMR Parsing and Generation

About

Abstract meaning representation (AMR) highlights the core semantic information of text in a graph structure. Recently, pre-trained language models (PLMs) have advanced the tasks of AMR parsing and AMR-to-text generation. However, PLMs are typically pre-trained on textual data and are thus sub-optimal for modeling structural knowledge. To this end, we investigate graph self-supervised training to improve the structure awareness of PLMs over AMR graphs. In particular, we introduce two graph auto-encoding strategies for graph-to-graph pre-training and four tasks to integrate text and graph information during pre-training. We further design a unified framework to bridge the gap between pre-training and fine-tuning tasks. Experiments on both AMR parsing and AMR-to-text generation show the superiority of our model. To our knowledge, we are the first to consider pre-training on semantic graphs.
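The abstract's graph auto-encoding idea can be sketched as a denoising objective: corrupt a linearized AMR graph, then train the model to reconstruct the original. The sketch below is illustrative only; the paper's two exact strategies are not detailed on this page, and the masking rule (mask concept tokens, keep brackets and `:role` edges) is an assumption.

```python
import random

def mask_graph_nodes(linearized_amr, mask_token="<mask>", ratio=0.3, seed=0):
    """Corrupt a linearized AMR graph for denoising pre-training.

    Illustrative assumption: concept/variable tokens may be masked, while
    structural tokens (parentheses, "/", and ":role" edge labels) are kept
    so the graph skeleton stays intact. The (corrupted, original) pair then
    serves as a graph-to-graph training example.
    """
    rng = random.Random(seed)
    tokens = linearized_amr.split()
    corrupted = []
    for tok in tokens:
        is_structural = tok in ("(", ")", "/") or tok.startswith(":")
        if not is_structural and rng.random() < ratio:
            corrupted.append(mask_token)  # hide this node from the encoder
        else:
            corrupted.append(tok)
    return " ".join(corrupted)

# Training pair: corrupted graph (input) -> original graph (target)
graph = "( w / want-01 :ARG0 ( b / boy ) :ARG1 ( g / go-02 :ARG0 b ) )"
noisy = mask_graph_nodes(graph)
```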

Xuefeng Bai, Yulong Chen, Yue Zhang • 2022

Related benchmarks

Task | Dataset | Metric | Result | Rank
AMR parsing | AMR 2.0 (LDC2017T10, test) | Smatch | 85.4 | 168
AMR parsing | AMR 3.0 (test) | Smatch | 84.2 | 45
AMR parsing | AMR 3.0 (LDC2020T02, test) | Smatch Labeled | 84.2 | 14
AMR-to-text generation | AMR 2.0 (test) | BLEU | 49.8 | 10
Text-to-UMR Parsing | UMR English sentences v2.0 | AnCast | 0.817 | 6
Scene Graph Parsing | Random (test) | Set Match | 28.45 | 6
Scene Graph Parsing | Length (test) | Set Match | 1.22e+3 | 6
AMR-to-text generation | AMR 3.0 (test) | BLEU | 49.2 | 5
Scene Graph Parsing | FACTUAL (test) | Completeness | 0.31 | 5
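The Smatch scores in the table measure triple overlap between predicted and gold AMR graphs. A simplified illustration is below: real Smatch first searches over variable alignments (via hill-climbing) before scoring, whereas this sketch assumes the alignment is already fixed, so it only shows the precision/recall/F1 computation over triples.

```python
def triple_f1(pred_triples, gold_triples):
    """F1 over AMR (source, relation, target) triples under a fixed alignment.

    Simplified: real Smatch additionally searches for the variable mapping
    that maximizes this score; here the mapping is assumed given.
    """
    pred, gold = set(pred_triples), set(gold_triples)
    matched = len(pred & gold)
    precision = matched / len(pred) if pred else 0.0
    recall = matched / len(gold) if gold else 0.0
    denom = precision + recall
    f1 = 2 * precision * recall / denom if denom else 0.0
    return precision, recall, f1

gold = {("w", "instance", "want-01"), ("b", "instance", "boy"), ("w", "ARG0", "b")}
pred = {("w", "instance", "want-01"), ("b", "instance", "boy")}
p, r, f = triple_f1(pred, gold)  # p = 1.0, r = 2/3, f = 0.8
```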

Other info

Code
