Encoder-Decoder Models Can Benefit from Pre-trained Masked Language Models in Grammatical Error Correction
About
This paper investigates how to effectively incorporate a pre-trained masked language model (MLM), such as BERT, into an encoder-decoder (EncDec) model for grammatical error correction (GEC). The answer is not as straightforward as one might expect, because common previous methods for incorporating an MLM into an EncDec model have potential drawbacks when applied to GEC. For example, the distribution of the inputs to a GEC model can differ considerably (erroneous, clumsy, etc.) from that of the corpora used to pre-train MLMs; the previous methods do not address this issue. Our experiments show that our proposed method, in which we first fine-tune an MLM on a given GEC corpus and then use the output of the fine-tuned MLM as additional features in the GEC model, maximizes the benefit of the MLM. The best-performing model achieves state-of-the-art performance on the BEA-2019 and CoNLL-2014 benchmarks. Our code is publicly available at: https://github.com/kanekomasahiro/bert-gec.
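The sketch below illustrates the "fine-tune, then use as additional features" idea in broad strokes, using the Hugging Face `transformers` API. It is not the authors' implementation (see the linked repository for that); the helper `bert_features` and the placeholder encoder embeddings are hypothetical, and the fine-tuning step is assumed to have been done beforehand.

```python
# Minimal sketch: a fine-tuned MLM's hidden states as extra features
# for an EncDec GEC model. Illustrative only, not the paper's code.
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
bert = BertModel.from_pretrained("bert-base-cased")
# Assumed prior step: `bert` has been fine-tuned with the masked-LM
# objective on the (erroneous) source side of the GEC corpus, so its
# input distribution matches what the GEC model actually sees.

def bert_features(sentence: str) -> torch.Tensor:
    """Return the MLM's hidden states for one sentence."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():  # the MLM stays frozen while training the EncDec model
        outputs = bert(**inputs)
    return outputs.last_hidden_state  # shape: (1, seq_len, 768)

# Inside the EncDec model, one plausible way to inject the features is
# to concatenate them with the encoder's own token embeddings before
# the encoder layers (hypothetical wiring, shown here with a placeholder).
src = "She go to school yesterday ."
mlm_feats = bert_features(src)                   # (1, T, 768)
token_embeds = torch.randn_like(mlm_feats)       # placeholder for real encoder embeddings
encoder_input = torch.cat([token_embeds, mlm_feats], dim=-1)  # (1, T, 1536)
```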
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Grammatical Error Correction | CoNLL 2014 (test) | F0.5 | 65.2 | 207 |
| Grammatical Error Correction | BEA shared task 2019 (test) | F0.5 | 69.8 | 139 |
| Grammatical Error Correction | JFLEG | GLEU | 62 | 47 |
| Grammatical Error Correction | CoNLL M2 14 | Precision | 72.6 | 27 |
| Grammatical Error Correction | BEA 19 | Precision | 72.3 | 12 |
| Grammatical Error Correction | FCE M2 (test) | Precision | 65 | 10 |
| Grammatical Error Correction | FCGEC | Exact Match (EM) | 10.88 | 9 |