
AMR Parsing via Graph-Sequence Iterative Inference

About

We propose a new end-to-end model that treats AMR parsing as a series of dual decisions on the input sequence and the incrementally constructed graph. At each time step, our model performs multiple rounds of attention, reasoning, and composition that aim to answer two critical questions: (1) which part of the input sequence to abstract; and (2) where in the output graph to construct the new concept. We show that the answers to these two questions are mutually dependent: each decision informs the other. We design a model based on iterative inference that helps achieve better answers from both perspectives, leading to greatly improved parsing accuracy. Our results significantly outperform all previously reported Smatch scores by large margins. Remarkably, without the help of any large-scale pre-trained language model (e.g., BERT), our model already surpasses the previous state of the art that uses BERT. With the help of BERT, we push the state-of-the-art results to 80.2% on LDC2017T10 (AMR 2.0) and 75.4% on LDC2014T12 (AMR 1.0).
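The core idea, alternating between a sequence-side decision and a graph-side decision until they agree, can be sketched as a toy fixed-point search. All score tables and function names below are illustrative placeholders, not the paper's model or code:

```python
# Toy sketch of graph-sequence iterative inference: two coupled decisions
# (which token to abstract, which graph node to attach the new concept to)
# are refined in alternation instead of being made independently.
# The score tables are made-up numbers standing in for learned attention.

def iterative_inference(seq_scores, graph_scores, num_iters=4):
    """seq_scores[node][token] scores abstracting a token given the current
    node hypothesis; graph_scores[token][node] scores attaching under a node
    given the current token hypothesis. Start from a default node guess and
    alternate greedy argmax decisions, letting each answer refine the other."""
    node = 0          # initial guess: attach under node 0
    token = None
    for _ in range(num_iters):
        # (1) which part of the input sequence to abstract, given the graph side
        token = max(range(len(seq_scores[node])), key=lambda t: seq_scores[node][t])
        # (2) where in the partial graph to construct the concept, given the token
        node = max(range(len(graph_scores[token])), key=lambda n: graph_scores[token][n])
    return token, node

seq_scores = [[0.1, 0.9, 0.2],   # scores of tokens 0..2 if attaching under node 0
              [0.8, 0.1, 0.3]]   # scores of tokens 0..2 if attaching under node 1
graph_scores = [[0.2, 0.9],      # scores of nodes 0..1 if abstracting token 0
                [0.1, 0.9],      # ... token 1
                [0.5, 0.4]]      # ... token 2
```

With these numbers, a single round picks token 1, but further rounds revise the pair to (token 0, node 1): the graph-side answer changes which span looks best to abstract, which is the mutual dependence the abstract describes.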

Deng Cai, Wai Lam • 2020

Related benchmarks

Task         Dataset                      Metric     Result  Rank
AMR parsing  LDC2017T10 AMR 2.0 (test)    Smatch     80.2    168
AMR parsing  AMR 1.0 (test)               Smatch     75.4    45
AMR parsing  AMR 1.0 LDC2014T12 (test)    Smatch F1  75.4    23
AMR parsing  AMR 3.0 LDC2020T02 (test)    Smatch     76.8    14
AMR parsing  Bio                          Smatch     42.22   8
AMR parsing  New3                         Smatch     60.81   8
AMR parsing  Little Prince                Smatch     71.03   8
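All results above are Smatch scores: the F1 of matching (relation, head, dependent) triples between the predicted and gold AMR graphs. The real metric additionally searches over variable alignments with hill climbing; the sketch below is a simplified version that assumes the variable names of the two graphs already align, and is not the official implementation:

```python
# Simplified Smatch-style F1 over AMR triples, assuming the predicted and
# gold graphs already use the same variable names (real Smatch searches
# for the best variable alignment before counting matches).

def simple_smatch_f1(pred_triples, gold_triples):
    """Precision/recall/F1 of exact triple overlap between two AMR graphs."""
    pred, gold = set(pred_triples), set(gold_triples)
    if not pred or not gold:
        return 0.0
    matched = len(pred & gold)
    if matched == 0:
        return 0.0
    precision = matched / len(pred)
    recall = matched / len(gold)
    return 2 * precision * recall / (precision + recall)

# Example: parse of "the boy wants ..." missing one gold relation.
pred = [("instance", "w", "want-01"),
        ("instance", "b", "boy"),
        ("ARG0", "w", "b")]
gold = pred + [("ARG1", "w", "g")]   # gold has one extra triple
```

Here all 3 predicted triples match (precision 1.0) but only 3 of 4 gold triples are recovered (recall 0.75), giving F1 = 6/7 ≈ 0.857.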

Other info

Code
