
Transition-based Parsing with Stack-Transformers

About

Modeling the parser state is key to good performance in transition-based parsing. Recurrent Neural Networks considerably improved the performance of transition-based systems by modeling the global state, e.g. stack-LSTM parsers, or the local state of contextualized features, e.g. Bi-LSTM parsers. Given the success of Transformer architectures in recent parsing systems, this work explores modifications of the sequence-to-sequence Transformer architecture to model either global or local parser states in transition-based parsing. We show that modifications of the cross-attention mechanism of the Transformer considerably strengthen performance on both dependency and Abstract Meaning Representation (AMR) parsing tasks, particularly for smaller models or limited training data.
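The core idea of modifying cross-attention to encode parser state can be illustrated with masked attention: dedicated heads are restricted to attend only to source tokens currently on the stack or on the buffer. The sketch below is a minimal, hypothetical illustration of that masking mechanism, not the paper's implementation; the function name and the toy parser state are assumptions for the example.

```python
import numpy as np

def masked_cross_attention(query, keys, values, allowed):
    """Single-query cross-attention restricted to `allowed` source positions.

    `allowed` is a boolean mask over source tokens, e.g. the tokens
    currently on the parser's stack (hypothetical example state).
    """
    scores = keys @ query / np.sqrt(len(query))
    scores = np.where(allowed, scores, -np.inf)  # block disallowed positions
    weights = np.exp(scores - scores.max())      # softmax over allowed positions
    weights /= weights.sum()
    return weights @ values, weights

# Toy encoder states for 5 source tokens.
d = 4
rng = np.random.default_rng(0)
enc = rng.standard_normal((5, d))
query = rng.standard_normal(d)

# Hypothetical parser state: tokens 0 and 2 are on the stack, 3 and 4 on the buffer.
stack_mask = np.array([True, False, True, False, False])
buffer_mask = np.array([False, False, False, True, True])

# One head attends only to the stack, another only to the buffer.
ctx_stack, w_stack = masked_cross_attention(query, enc, enc, stack_mask)
ctx_buffer, w_buffer = masked_cross_attention(query, enc, enc, buffer_mask)
```

Because the mask changes as the parser shifts and reduces, such heads track the evolving global (stack) and local (buffer) state at each decoding step.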

Ramon Fernandez Astudillo, Miguel Ballesteros, Tahira Naseem, Austin Blodgett, Radu Florian • 2020

Related benchmarks

Task               | Dataset                    | Metric | Result | Rank
AMR parsing        | LDC2017T10 AMR 2.0 (test)  | Smatch | 80.2   | 168
Dependency Parsing | Penn Treebank (PTB) (test) | LAS    | 94.7   | 80
AMR parsing        | AMR 1.0 (test)             | Smatch | 76.9   | 45
