# Cross-lingual AMR Aligner: Paying Attention to Cross-Attention

## About
This paper introduces a novel aligner for Abstract Meaning Representation (AMR) graphs that scales cross-lingually, and is thus capable of aligning units and spans in sentences of different languages. Our approach leverages modern Transformer-based parsers, which inherently encode alignment information in their cross-attention weights, allowing us to extract this information during parsing. This eliminates the need for the English-specific rules or the Expectation Maximization (EM) algorithm used in previous approaches. In addition, we propose a guided supervised method that uses alignment to further enhance the performance of our aligner. We achieve state-of-the-art results on AMR alignment benchmarks and demonstrate our aligner's ability to obtain competitive results across multiple languages. Our code will be available at [github.com/Babelscape/AMR-alignment](https://www.github.com/Babelscape/AMR-alignment).
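To illustrate the core idea, the sketch below shows how alignments could be read off a parser's cross-attention weights. This is a hypothetical simplification, not the authors' implementation: we assume a matrix `cross_attn` of shape (num_nodes, num_tokens), e.g. averaged over attention heads, and align each graph node to its highest-weight source token.

```python
# Hypothetical sketch: derive node-to-token alignments from cross-attention.
# `cross_attn[i][j]` is assumed to be the attention weight of graph node i
# on sentence token j (e.g. averaged over layers/heads during parsing).

def align_from_cross_attention(cross_attn, nodes, tokens):
    """Map each AMR node to the sentence token with the highest attention weight."""
    alignment = {}
    for node, weights in zip(nodes, cross_attn):
        best_idx = max(range(len(weights)), key=lambda j: weights[j])
        alignment[node] = (best_idx, tokens[best_idx])
    return alignment

# Toy example: 3 graph nodes attending over 4 sentence tokens.
tokens = ["The", "cat", "drinks", "milk"]
nodes = ["drink-01", "cat", "milk"]
cross_attn = [
    [0.05, 0.10, 0.80, 0.05],  # "drink-01" attends mostly to "drinks"
    [0.10, 0.75, 0.10, 0.05],  # "cat" attends mostly to "cat"
    [0.05, 0.05, 0.15, 0.75],  # "milk" attends mostly to "milk"
]

print(align_from_cross_attention(cross_attn, nodes, tokens))
# {'drink-01': (2, 'drinks'), 'cat': (1, 'cat'), 'milk': (3, 'milk')}
```

In practice the paper aligns spans and subgraphs rather than single tokens, but the same principle applies: the cross-attention distribution produced during parsing already carries the alignment signal, so no separate EM-based alignment step is needed.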
## Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Subgraph Alignment | LEAMR 1.0 (test) | Exact Alignment Precision | 94.39 | 13 |
| Relation Alignment | LEAMR 1.0 (test) | Exact Alignment Precision | 0.8803 | 6 |
| Inter-lingual Subgraph Identification | ISI English | Precision | 96.3 | 5 |
| Reentrancy Alignment | LEAMR 1.0 (test) | Exact Alignment Precision | 56.9 | 5 |
| Inter-lingual Subgraph Identification | ISI Italian (IT) | Precision | 67.4 | 4 |
| Inter-lingual Subgraph Identification | ISI German | Precision | 64 | 4 |
| Inter-lingual Subgraph Identification | ISI Spanish (ES) | Precision | 67.9 | 4 |