
LMSpell: Neural Spell Checking for Low-Resource Languages

About

Spell correction remains a challenging problem for low-resource languages (LRLs). While pretrained language models (PLMs) have been employed for spell correction, their use is still limited to a handful of languages, and there has been no proper comparison across PLMs. We present the first empirical study on the effectiveness of PLMs for spell correction that includes LRLs. We find that Large Language Models (LLMs) outperform their counterparts (encoder-based and encoder-decoder) when the fine-tuning dataset is large. This observation holds even for languages on which the LLM was not pre-trained. We release LMSpell, an easy-to-use spell correction toolkit that works across PLMs. It includes an evaluation function that compensates for the hallucination of LLMs. Further, we present a case study on Sinhala to shed light on the plight of spell correction for LRLs.
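The abstract notes that the toolkit's evaluation compensates for LLM hallucination: an LLM may "correct" tokens that were never misspelled, inflating naive edit-based scores. A minimal sketch of a detection-rate metric in that spirit is shown below. This is a hypothetical illustration, not LMSpell's actual API; the function name, alignment assumption, and scoring rule are all assumptions for exposition.

```python
# Hypothetical sketch of a detection-rate metric that discounts
# hallucinated edits; LMSpell's real evaluation function may differ.

def detection_rate(source, target, predicted):
    """Token-level detection rate over pre-aligned sequences.

    source:    tokens containing spelling errors
    target:    gold-standard corrected tokens
    predicted: system output tokens

    An error counts as "detected" only when the system changes a token
    at a position where source and target actually disagree. Edits at
    already-correct positions (hallucinations) earn no credit.
    """
    if not (len(source) == len(target) == len(predicted)):
        # LLM outputs may insert or drop tokens; a real toolkit would
        # align the sequences first. Here we simply bail out.
        raise ValueError("sequences must be pre-aligned")

    error_positions = [i for i, (s, t) in enumerate(zip(source, target))
                       if s != t]
    if not error_positions:
        return 1.0  # nothing to detect

    detected = sum(1 for i in error_positions if predicted[i] != source[i])
    return detected / len(error_positions)


src = ["teh", "cat", "saat", "down"]
tgt = ["the", "cat", "sat", "down"]
hyp = ["the", "cat", "saat", "down"]
print(detection_rate(src, tgt, hyp))  # 0.5: one of two errors was changed
```

Scoring only positions where the gold reference differs from the source is one simple way to keep spurious rewrites of correct tokens from inflating the score.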

Akesh Gunathilake, Nadil Karunarathna, Tharusha Bandaranayake, Nisansa de Silva, Surangika Ranathunga • 2025

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Spell Correction | Azerbaijani (AZ) | Detection Rate: 8.31 | 6 |
| Spell Correction | Bulgarian (BG) | Detection Rate: 5.92 | 6 |
| Spell Correction | French (FR) | Detection Rate: 88.19 | 6 |
| Spell Correction | Hindi (HI) | Detection Rate: 89.2 | 6 |
| Spell Correction | Korean (KO) | Detection Rate: 0.9732 | 6 |
| Spell Correction | Sinhala (SI) | Detection Rate: 82.59 | 6 |
| Spell Correction | Vietnamese (VI) | Detection Rate: 71.85 | 6 |
