
Maximum Bayes Smatch Ensemble Distillation for AMR Parsing

About

AMR parsing has experienced an unprecedented increase in performance in the last three years, due to a mixture of effects including architecture improvements and transfer learning. Self-learning techniques have also played a role in pushing performance forward. However, for most recent high-performing parsers, the effect of self-learning and silver data augmentation seems to be fading. In this paper we propose to overcome these diminishing returns of silver data by combining Smatch-based ensembling techniques with ensemble distillation. In an extensive experimental setup, we push single-model English parser performance to a new state of the art, 85.9 (AMR2.0) and 84.3 (AMR3.0), and return to substantial gains from silver data augmentation. We also attain a new state of the art for cross-lingual AMR parsing for Chinese, German, Italian and Spanish. Finally, we explore the impact of the proposed technique on domain adaptation, and show that it can produce gains rivaling those of human-annotated data for QALD-9 and achieve a new state of the art for BioAMR.
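The core of Smatch-based ensembling is a Maximum Bayes Risk style selection: given several candidate AMR graphs for the same sentence, pick the candidate with the highest average Smatch against the other candidates. The sketch below illustrates the selection step only, under simplifying assumptions: graphs are represented as sets of triples and similarity is plain triple-overlap F1, whereas real Smatch also searches over variable alignments. All function names here (`triple_f1`, `mbr_select`) are illustrative, not from the paper's code.

```python
def triple_f1(a, b):
    """F1 overlap between two graphs represented as sets of triples.
    Simplified stand-in for Smatch: real Smatch additionally searches
    over variable-to-variable alignments between the two graphs."""
    if not a or not b:
        return 0.0
    overlap = len(a & b)
    precision = overlap / len(b)
    recall = overlap / len(a)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)


def mbr_select(candidates):
    """Maximum-Bayes-style selection: return the candidate graph with
    the highest average similarity (here, triple F1) to all the other
    candidates in the ensemble."""
    def avg_sim(i):
        others = [c for j, c in enumerate(candidates) if j != i]
        return sum(triple_f1(candidates[i], o) for o in others) / len(others)

    best = max(range(len(candidates)), key=avg_sim)
    return candidates[best]
```

In the paper's pipeline, the graph selected this way from an ensemble of parsers can then serve as a silver target for distilling a single student model.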

Young-Suk Lee, Ramon Fernandez Astudillo, Thanh Lam Hoang, Tahira Naseem, Radu Florian, Salim Roukos • 2021

Related benchmarks

Task | Dataset | Metric | Result | Rank
AMR parsing | LDC2017T10 AMR 2.0 (test) | Smatch | 85.9 | 168
AMR parsing | AMR 3.0 (test) | Smatch | 84.3 | 45
AMR parsing | BioAMR (test) | Smatch | 81.3 | 17
Cross-lingual AMR parsing | AMR 2.0 Spanish (ES), human-translated (test) | Smatch | 77.1 | 15
Cross-lingual AMR parsing | AMR 2.0 German (DE), human-translated (test) | Smatch | 73.7 | 15
Cross-lingual AMR parsing | AMR 2.0 Italian (IT), human-translated (test) | Smatch | 76.1 | 15
Cross-lingual AMR parsing | AMR 2.0 Chinese (ZH), human-translated (test) | Smatch | 63.0 | 13
AMR parsing | QALD-9-AMR (test) | Smatch | 90.1 | 8
AMR parsing | TLP (test) | Smatch | 82.3 | 5
