
An Empirical Study of Incorporating Pseudo Data into Grammatical Error Correction

About

The incorporation of pseudo data into the training of grammatical error correction models has been one of the main factors in improving their performance. However, there is no consensus on the experimental configuration, namely how the pseudo data should be generated and used. In this study, these choices are investigated through extensive experiments, and state-of-the-art performance is achieved on the CoNLL-2014 test set ($F_{0.5}=65.0$) and the official test set of the BEA-2019 shared task ($F_{0.5}=70.2$) without any modification to the model architecture.
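The paper compares several ways of generating pseudo data. One family of methods injects synthetic errors directly into clean sentences to produce (noisy source, clean target) training pairs. A minimal, hypothetical sketch of such direct noising (the paper's actual noising scheme and probabilities differ in detail):

```python
import random

def noise_sentence(tokens, p_mask=0.1, p_delete=0.1, p_swap=0.1, rng=None):
    """Inject synthetic errors into a clean token list to create a pseudo source.

    Illustrative only: real pipelines tune the error distribution and may also
    substitute words or use backtranslation instead of rule-based noise.
    """
    rng = rng or random.Random(0)
    out = []
    i = 0
    while i < len(tokens):
        r = rng.random()
        if r < p_delete:
            i += 1  # drop this token
        elif r < p_delete + p_swap and i + 1 < len(tokens):
            out.extend([tokens[i + 1], tokens[i]])  # swap adjacent tokens
            i += 2
        elif r < p_delete + p_swap + p_mask:
            out.append("<mask>")  # replace token with a placeholder
            i += 1
        else:
            out.append(tokens[i])  # keep token unchanged
            i += 1
    return out

clean = "the cat sat on the mat".split()
pseudo_source = noise_sentence(clean)
# (pseudo_source, clean) forms one pseudo training pair: noisy input, clean target
```

The model is then pretrained on such pairs and fine-tuned on genuine learner data; the study's experiments vary exactly these generation and usage choices.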

Shun Kiyono, Jun Suzuki, Masato Mita, Tomoya Mizumoto, Kentaro Inui (2019)

Related benchmarks

Task                          Dataset                       Metric       Result   Rank
Grammatical Error Correction  CoNLL 2014 (test)             F0.5 Score   67.9     207
Grammatical Error Correction  BEA shared task 2019 (test)   F0.5 Score   70.2     139
Grammatical Error Correction  JFLEG                         GLEU         61.4     47
Grammatical Error Correction  JFLEG (test)                  GLEU         61.4     45
Grammatical Error Correction  CoNLL 2014                    F0.5         64.7     39
Grammatical Error Correction  CoNLL M2 14                   Precision    72.4     27
Grammatical Error Correction  BEA 2019 (dev)                F0.5 Score   53.95    19
Grammatical Error Correction  BEA 19                        Precision    74.7     12
Grammatical Error Correction  FCE                           Precision    55.11    9
