LMSpell: Neural Spell Checking for Low-Resource Languages
About
Spell correction is still a challenging problem for low-resource languages (LRLs). While pretrained language models (PLMs) have been employed for spell correction, their use is still limited to a handful of languages, and there has been no proper comparison across PLMs. We present the first empirical study on the effectiveness of PLMs for spell correction that includes LRLs. We find that Large Language Models (LLMs) outperform their encoder-based and encoder-decoder counterparts when the fine-tuning dataset is large. This observation holds even for languages on which the LLM was not pre-trained. We release LMSpell, an easy-to-use spell correction toolkit that works across PLMs. It includes an evaluation function that compensates for the hallucination of LLMs. Further, we present a case study with Sinhala to shed light on the plight of spell correction for LRLs.
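To illustrate the kind of hallucination-aware evaluation described above, here is a minimal sketch (not the LMSpell API itself) of a detection-rate metric. The assumption, labeled here as ours, is that when an LLM hallucinates by inserting or deleting tokens, word-level alignment becomes unreliable, so such predictions are skipped rather than scored:

```python
def detection_rate(sources, predictions, references):
    """Fraction of true misspellings that the model detects (i.e., changes).

    Hallucination guard (an assumption of this sketch): predictions whose
    token count differs from the source are skipped, because an LLM that
    inserts or deletes words breaks word-level alignment.
    """
    detected, total_errors = 0, 0
    for src, pred, ref in zip(sources, predictions, references):
        s, p, r = src.split(), pred.split(), ref.split()
        if len(p) != len(s) or len(r) != len(s):
            continue  # hallucinated insertion/deletion: alignment unreliable
        for sw, pw, rw in zip(s, p, r):
            if sw != rw:        # a genuine misspelling in the source
                total_errors += 1
                if pw != sw:    # the model changed the word -> detected
                    detected += 1
    return detected / total_errors if total_errors else 0.0
```

For example, `detection_rate(["teh cat"], ["the cat"], ["the cat"])` returns `1.0`, while a prediction that drops a word is excluded from scoring entirely.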
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Spell Correction | AZ (Azerbaijani) | Detection Rate | 8.31 | 6 |
| Spell Correction | BG (Bulgarian) | Detection Rate | 5.92 | 6 |
| Spell Correction | FR (French) | Detection Rate | 88.19 | 6 |
| Spell Correction | HI (Hindi) | Detection Rate | 89.2 | 6 |
| Spell Correction | KO (Korean) | Detection Rate | 0.9732 | 6 |
| Spell Correction | SI (Sinhala) | Detection Rate | 82.59 | 6 |
| Spell Correction | VI (Vietnamese) | Detection Rate | 71.85 | 6 |