
Incorporating Graph Information in Transformer-based AMR Parsing

About

Abstract Meaning Representation (AMR) is a Semantic Parsing formalism that aims at providing a semantic graph abstraction representing a given text. Current approaches are based on autoregressive language models such as BART or T5, fine-tuned through Teacher Forcing to obtain a linearized version of the AMR graph from a sentence. In this paper, we present LeakDistill, a model and method that explores a modification to the Transformer architecture, using structural adapters to explicitly incorporate graph information into the learned representations and improve AMR parsing performance. Our experiments show how, by employing word-to-node alignment to embed graph structural information into the encoder at training time, we can obtain state-of-the-art AMR parsing through self-knowledge distillation, even without the use of additional data. We release the code at http://www.github.com/sapienzanlp/LeakDistill.
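The core architectural idea in the abstract is a structural adapter: a small bottleneck module inserted into the encoder that mixes token representations along graph edges derived from word-to-node alignment. Below is a minimal, illustrative sketch of that idea as a graph-convolution-style adapter with a residual connection; the function and parameter names are ours, not the authors' implementation.

```python
import numpy as np

def structural_adapter(hidden, adj, w_down, w_up):
    """Graph-convolution-style adapter (illustrative sketch).

    hidden: (n_nodes, d_model) encoder states aligned to graph nodes
    adj:    (n_nodes, n_nodes) adjacency from word-to-node alignment
    w_down: (d_model, d_adapter) down-projection (bottleneck)
    w_up:   (d_adapter, d_model) up-projection
    """
    # Add self-loops and row-normalize so each node averages its neighborhood.
    a = adj + np.eye(adj.shape[0])
    a = a / a.sum(axis=1, keepdims=True)
    # Message passing over the graph, then bottleneck projection with ReLU.
    update = np.maximum(a @ hidden @ w_down, 0.0) @ w_up
    # Residual connection, as in standard adapter layers.
    return hidden + update

# Toy example: a 4-node path graph with 8-dim states, 2-dim bottleneck.
rng = np.random.default_rng(0)
n, d, r = 4, 8, 2
hidden = rng.normal(size=(n, d))
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
out = structural_adapter(hidden, adj, rng.normal(size=(d, r)), rng.normal(size=(r, d)))
print(out.shape)  # (4, 8)
```

Because the adapter is residual, zeroing its projections leaves the encoder states untouched, which is why such modules can be added to a pretrained Transformer without disrupting it at initialization.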

Pavlo Vasylenko, Pere-Lluís Huguet Cabot, Abelardo Carlos Martínez Lorenzo, Roberto Navigli • 2023

Related benchmarks

Task                  Dataset                       Metric   Result   Rank
AMR parsing           LDC2017T10 AMR 2.0 (test)     Smatch   86.1     168
AMR parsing           AMR 3.0 (test)                Smatch   84.6     45
AMR parsing           BioAMR (test)                 Smatch   64.5     17
Text-to-UMR parsing   UMR English sentences v2.0    AnCast   0.7811   6
AMR parsing           TLP (test)                    Smatch   82.6     5
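Most rows above report Smatch, the standard AMR parsing metric: an F1 score over the relation triples of the predicted and gold graphs, maximized over variable alignments. The sketch below shows the triple-F1 core of the metric for a toy pair of graphs, assuming variables are already aligned (real Smatch searches over variable mappings with hill climbing); the triples are invented for illustration.

```python
def triple_f1(pred, gold):
    """Smatch-style F1 over AMR triples, given a fixed variable alignment."""
    matched = len(set(pred) & set(gold))
    precision = matched / len(pred)
    recall = matched / len(gold)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Toy graphs for "the boy wants ...": the prediction gets one concept wrong.
gold = {("w", "instance", "want-01"), ("w", "ARG0", "b"), ("b", "instance", "boy")}
pred = {("w", "instance", "want-01"), ("w", "ARG0", "b"), ("b", "instance", "girl")}
score = triple_f1(pred, gold)
print(round(score, 3))  # 0.667 — 2 of 3 triples match, so P = R = F1 = 2/3
```

A reported Smatch of 86.1 on AMR 2.0 thus means that, under the best variable alignment found, about 86% of triples are shared between predicted and gold graphs in the F1 sense.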

Other info

Code
