Incorporating Graph Information in Transformer-based AMR Parsing
About
Abstract Meaning Representation (AMR) is a Semantic Parsing formalism that represents the meaning of a given text as a semantic graph. Current approaches are based on autoregressive language models such as BART or T5, fine-tuned through Teacher Forcing to produce a linearized version of the AMR graph from a sentence. In this paper, we present LeakDistill, a model and method that explores a modification to the Transformer architecture, using structural adapters to explicitly incorporate graph information into the learned representations and improve AMR parsing performance. Our experiments show how, by employing word-to-node alignment to embed graph structural information into the encoder at training time, we can obtain state-of-the-art AMR parsing through self-knowledge distillation, even without the use of additional data. We release the code at http://www.github.com/sapienzanlp/LeakDistill.
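To illustrate the graph linearization the abstract refers to, below is a minimal, hypothetical sketch of flattening a toy AMR graph into a PENMAN-style bracketed string, the kind of sequence a fine-tuned BART/T5 model is trained to emit. The graph encoding (`nodes`/`edges` dicts) and function names are illustrative assumptions, not the paper's actual implementation.

```python
def linearize(graph, node, visited=None):
    """Depth-first serialization of an AMR-like graph into a
    PENMAN-style bracketed string (illustrative sketch)."""
    if visited is None:
        visited = set()
    concept = graph["nodes"][node]
    if node in visited:
        # Re-entrant node: emit only the variable, not the full subtree.
        return node
    visited.add(node)
    parts = [f"({node} / {concept}"]
    for role, child in graph["edges"].get(node, []):
        parts.append(f":{role} {linearize(graph, child, visited)}")
    return " ".join(parts) + ")"

# Toy graph for "The boy wants to go"; note the re-entrancy on "b".
amr = {
    "nodes": {"w": "want-01", "b": "boy", "g": "go-01"},
    "edges": {
        "w": [("ARG0", "b"), ("ARG1", "g")],
        "g": [("ARG0", "b")],  # "boy" is also the goer
    },
}

print(linearize(amr, "w"))
# (w / want-01 :ARG0 (b / boy) :ARG1 (g / go-01 :ARG0 b))
```

At training time the model sees sentence/linearization pairs; at inference the generated sequence is parsed back into a graph.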
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| AMR parsing | AMR 2.0 (LDC2017T10, test) | Smatch | 86.1 | 168 |
| AMR parsing | AMR 3.0 (test) | Smatch | 84.6 | 45 |
| AMR parsing | BioAMR (test) | Smatch | 64.5 | 17 |
| Text-to-UMR parsing | UMR English sentences v2.0 | AnCast | 0.7811 | 6 |
| AMR parsing | TLP (test) | Smatch | 82.6 | 5 |