
A Simple Recipe for Multilingual Grammatical Error Correction

About

This paper presents a simple recipe to train state-of-the-art multilingual Grammatical Error Correction (GEC) models. We achieve this by first proposing a language-agnostic method to generate a large number of synthetic examples. The second ingredient is to use large-scale multilingual language models (up to 11B parameters). Once fine-tuned on language-specific supervised sets, we surpass the previous state-of-the-art results on GEC benchmarks in four languages: English, Czech, German and Russian. Having established a new set of baselines for GEC, we make our results easily reproducible and accessible by releasing the cLang-8 dataset. It is produced by using our best model, which we call gT5, to clean the targets of the widely used yet noisy lang-8 dataset. cLang-8 greatly simplifies typical GEC training pipelines composed of multiple fine-tuning stages: we demonstrate that performing a single fine-tuning step on cLang-8 with off-the-shelf language models yields further accuracy improvements over an already top-performing gT5 model for English.
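The recipe described above ultimately reduces to standard sequence-to-sequence fine-tuning on (ungrammatical source, corrected target) pairs. A minimal sketch of the data-preparation step, assuming a hypothetical tab-separated layout with one source/target pair per line (the actual cLang-8 release format may differ):

```python
# Sketch: turn cLang-8-style (source, target) pairs into seq2seq
# training examples. The TSV layout is an assumption for illustration,
# not the official cLang-8 release format.
import csv
import io

def load_pairs(tsv_text):
    """Yield {'input': source, 'target': correction} dicts from TSV text."""
    reader = csv.reader(io.StringIO(tsv_text), delimiter="\t")
    for row in reader:
        if len(row) != 2:
            continue  # skip malformed lines
        source, target = row
        yield {"input": source.strip(), "target": target.strip()}

sample = "I goes to school yesterday.\tI went to school yesterday.\n"
examples = list(load_pairs(sample))
```

Each example dict would then be fed, in a single fine-tuning stage, to an off-the-shelf encoder-decoder language model of the kind the paper uses.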

Sascha Rothe, Jonathan Mallinson, Eric Malmi, Sebastian Krause, Aliaksei Severyn • 2021

Related benchmarks

Task | Dataset | Metric | Result | Rank
Grammatical Error Correction | CoNLL 2014 (test) | F0.5 Score | 68.9 | 207
Grammatical Error Correction | BEA shared task 2019 (test) | F0.5 Score | 75.9 | 139
Grammatical Error Correction | BEA 2019 (dev) | F0.5 Score | 56.21 | 19
Grammatical Error Correction | RULEC-GEC Russian (test) | F0.5 Score | 51.62 | 14
Grammatical Error Correction | BEA (dev) | Precision (%) | 60.9 | 14
Grammatical Error Correction | BEA 2019 (test) | F0.5 Score | 75.9 | 12
Grammatical Error Correction | BEA (test) | Precision | 73.2 | 9
Grammatical Error Correction | German GEC | F0.5 Score | 75.96 | 7
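Most benchmarks above report F0.5, the standard GEC metric: it weights precision twice as heavily as recall, reflecting that a correction system should avoid introducing bad edits even at the cost of missing some errors. A quick sketch of the general F-beta formula:

```python
def f_beta(precision, recall, beta=0.5):
    """F_beta = (1 + beta^2) * P * R / (beta^2 * P + R)."""
    if precision == 0 and recall == 0:
        return 0.0
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# With beta=0.5, high precision is rewarded more than high recall:
# f_beta(0.8, 0.4) exceeds f_beta(0.4, 0.8).
```

Benchmark scorers compute precision and recall over edit spans (not whole sentences) before applying this formula, so the sketch only illustrates the final aggregation step.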
