
Translate, then Parse! A strong baseline for Cross-Lingual AMR Parsing

About

In cross-lingual Abstract Meaning Representation (AMR) parsing, researchers develop models that project sentences from various languages onto their AMRs to capture their essential semantic structures: given a sentence in any language, we aim to capture its core semantic content through concepts connected by manifold types of semantic relations. Methods typically leverage large silver training data to learn a single model that is able to project non-English sentences to AMRs. However, we find that a simple baseline tends to be overlooked: translating the sentences to English and projecting their AMR with a monolingual AMR parser (translate+parse, T+P). In this paper, we revisit this simple two-step baseline and enhance it with a strong NMT system and a strong AMR parser. Our experiments show that T+P outperforms a recent state-of-the-art system across all tested languages: German, Italian, Spanish and Mandarin, with +14.6, +12.6, +14.3 and +16.0 Smatch points, respectively.
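The T+P baseline described in the abstract is a simple composition of two existing systems. The sketch below illustrates the control flow only; `translate_fn` and `parse_fn` are hypothetical stand-ins for a real NMT system and a monolingual English AMR parser, and the toy implementations exist purely to make the example runnable.

```python
# Minimal sketch of the two-step translate+parse (T+P) baseline,
# assuming the caller supplies a translation function and an
# English AMR parser (both hypothetical stand-ins here).

def translate_then_parse(sentence, translate_fn, parse_fn):
    """Project a non-English sentence to an AMR via English."""
    english = translate_fn(sentence)  # step 1: machine-translate into English
    return parse_fn(english)          # step 2: parse English sentence to AMR


# Toy stand-ins, illustrating the pipeline shape only:
def toy_translate(sentence):
    lookup = {"Der Junge will gehen": "The boy wants to go"}
    return lookup.get(sentence, sentence)


def toy_parse(sentence):
    # A real parser would produce a full AMR graph; this placeholder
    # returns a hand-written AMR for one known sentence.
    if sentence == "The boy wants to go":
        return "(w / want-01 :ARG0 (b / boy) :ARG1 (g / go-02 :ARG0 b))"
    return "(a / amr-unknown)"


amr = translate_then_parse("Der Junge will gehen", toy_translate, toy_parse)
```

The appeal of this design is that both components can be swapped independently: any stronger NMT system or English AMR parser improves the cross-lingual result without retraining a joint model.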

Sarah Uhrig, Yoalli Rezepka Garcia, Juri Opitz, Anette Frank • 2021

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Cross-lingual AMR Parsing | AMR 2.0 Spanish (ES), human-translated (test) | Smatch | 72.3 | 15 |
| Cross-lingual AMR Parsing | AMR 2.0 German (DE), human-translated (test) | Smatch | 0.676 | 15 |
| Cross-lingual AMR Parsing | AMR 2.0 Italian (IT), human-translated (test) | Smatch | 70.7 | 15 |
| Cross-lingual AMR Parsing | AMR 2.0 Chinese (ZH), human-translated (test) | Smatch | 59.1 | 13 |
