An Empirical Study of Incorporating Pseudo Data into Grammatical Error Correction
About
The incorporation of pseudo data in the training of grammatical error correction models has been one of the main factors in improving the performance of such models. However, consensus is lacking on experimental configurations, namely, choosing how the pseudo data should be generated or used. In this study, these choices are investigated through extensive experiments, and state-of-the-art performance is achieved on the CoNLL-2014 test set ($F_{0.5}=65.0$) and the official test set of the BEA-2019 shared task ($F_{0.5}=70.2$) without making any modifications to the model architecture.
Shun Kiyono, Jun Suzuki, Masato Mita, Tomoya Mizumoto, Kentaro Inui · 2019
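The $F_{0.5}$ metric reported throughout weights precision twice as heavily as recall, reflecting that in grammatical error correction a wrong "correction" is considered more harmful than a missed one. A minimal sketch of the general $F_\beta$ formula (not code from the paper):

```python
def f_beta(precision: float, recall: float, beta: float = 0.5) -> float:
    """Weighted harmonic mean of precision and recall.

    beta < 1 favors precision; beta = 0.5 is the standard GEC setting.
    """
    if precision == 0.0 and recall == 0.0:
        return 0.0
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)
```

With $\beta = 0.5$, swapping precision and recall changes the score: a precision-heavy system scores higher than a recall-heavy one with the same two values.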
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Grammatical Error Correction | CoNLL 2014 (test) | F0.5 | 67.9 | 207 |
| Grammatical Error Correction | BEA shared task 2019 (test) | F0.5 | 70.2 | 139 |
| Grammatical Error Correction | JFLEG | GLEU | 61.4 | 47 |
| Grammatical Error Correction | JFLEG (test) | GLEU | 61.4 | 45 |
| Grammatical Error Correction | CoNLL 2014 | F0.5 | 64.7 | 39 |
| Grammatical Error Correction | CoNLL M2 14 | Precision | 72.4 | 27 |
| Grammatical Error Correction | BEA 2019 (dev) | F0.5 | 53.95 | 19 |
| Grammatical Error Correction | BEA 19 | Precision | 74.7 | 12 |
| Grammatical Error Correction | FCE | Precision | 55.11 | 9 |