
A Simple and Strong Baseline for End-to-End Neural RST-style Discourse Parsing

About

To promote and further develop RST-style discourse parsing models, we need a strong baseline that can be regarded as a reference for reporting reliable experimental results. This paper explores a strong baseline by integrating existing simple parsing strategies, top-down and bottom-up, with various transformer-based pre-trained language models. The experimental results obtained from two benchmark datasets demonstrate that parsing performance relies more strongly on the pre-trained language models than on the parsing strategies. In particular, the bottom-up parser achieves large performance gains over the current best parser when employing DeBERTa. Our analysis of intra-sentential parsing, multi-sentential parsing, and nuclearity prediction further reveals that language models pre-trained with a span-masking scheme especially boost parsing performance.
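The abstract only names the two parsing strategies. As a hypothetical illustration (not the authors' implementation), a bottom-up parser can be sketched as greedy merging of adjacent spans under a learned scorer; the toy `scorer` below is a stand-in for the pretrained-language-model scoring the paper actually uses:

```python
def bottom_up_parse(edus, scorer):
    """Greedily merge the highest-scoring adjacent pair of spans
    until a single binary discourse tree covers all EDUs."""
    # Each node is (subtree, number_of_edus_covered); leaves are raw EDUs.
    nodes = [(edu, 1) for edu in edus]
    while len(nodes) > 1:
        # Pick the adjacent pair the scorer likes best
        # (ties resolve to the leftmost pair).
        best = max(range(len(nodes) - 1),
                   key=lambda i: scorer(nodes[i], nodes[i + 1]))
        left, right = nodes[best], nodes[best + 1]
        merged = ((left[0], right[0]), left[1] + right[1])
        nodes[best:best + 2] = [merged]
    return nodes[0][0]

# Toy scorer: favor merging the smallest adjacent spans first,
# which yields a balanced tree on uniform input. A real system
# would instead score each candidate span with a fine-tuned
# pretrained language model such as DeBERTa.
tree = bottom_up_parse(["e1", "e2", "e3", "e4"],
                       lambda l, r: -(l[1] + r[1]))
```

A top-down parser would do the reverse: recursively score candidate split points inside a span and pick the best one, rather than merging upward.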

Naoki Kobayashi, Tsutomu Hirao, Hidetaka Kamigaito, Manabu Okumura, Masaaki Nagata • 2022

Related benchmarks

Task                   Dataset                             Result           Rank
RST Parsing            RST-DT original Parseval (test)     Span F1 78.5     28
RST Discourse Parsing  RST-DT (test)                       Span F1 78.5     12
RST Parsing            Instr-DT Standard-Parseval (test)   Span Score 77.8  10
RST Discourse Parsing  GUM Corpus (test)                   Span F1 74.4     10
RST Discourse Parsing  Instr-DT (test)                     Span F1 77.8     8
RST Parsing            RST-DT (test)                       --               7

Other info

Code
