GlotLID: Language Identification for Low-Resource Languages
About
Several recent papers have published good solutions for language identification (LID) for about 300 high- and medium-resource languages. However, no existing LID (i) covers a wide range of low-resource languages, (ii) is rigorously evaluated and reliable, and (iii) is efficient and easy to use. Here, we publish GlotLID-M, an LID model that satisfies the desiderata of wide coverage, reliability and efficiency. It identifies 1665 languages, a large increase in coverage compared to prior work. In our experiments, GlotLID-M outperforms four baselines (CLD3, FT176, OpenLID and NLLB) when balancing F1 and false positive rate (FPR). We analyze the unique challenges that low-resource LID poses: incorrect corpus metadata, leakage from high-resource languages, difficulty separating closely related languages, handling of macrolanguages vs. varieties, and noisy data in general. We hope that integrating GlotLID-M into dataset creation pipelines will improve quality and enhance accessibility of NLP technology for low-resource languages and cultures. The GlotLID-M model (including future versions), code, and list of data sources are available at: https://github.com/cisnlp/GlotLID.
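A minimal usage sketch, assuming GlotLID-M is distributed as a fastText binary (the GitHub repository above links the authoritative model files; the Hugging Face repo id `cisnlp/glotlid` and filename `model.bin` used in the commented example are assumptions to verify there). fastText classifiers return labels with a `__label__` prefix; GlotLID labels combine an ISO 639-3 code with a script, e.g. `__label__eng_Latn`.

```python
def parse_label(raw_label: str) -> str:
    """Strip fastText's '__label__' prefix, leaving the ISO 639-3 + script
    code, e.g. '__label__eng_Latn' -> 'eng_Latn'."""
    return raw_label.removeprefix("__label__")


def identify(model, text: str, threshold: float = 0.5):
    """Return (language, confidence), or (None, confidence) when the top
    score falls below `threshold`. `model` is any loaded fastText model
    exposing .predict(); fastText predicts on single lines, so newlines
    are flattened first. The threshold value is illustrative, not from
    the paper."""
    labels, scores = model.predict(text.replace("\n", " "))
    lang, score = parse_label(labels[0]), float(scores[0])
    return (lang if score >= threshold else None), score


# Example wiring (downloads a large model, hence commented out;
# repo id and filename are assumptions -- check the GitHub repo):
# import fasttext
# from huggingface_hub import hf_hub_download
# model = fasttext.load_model(hf_hub_download("cisnlp/glotlid", "model.bin"))
# print(identify(model, "Grüß Gott, wie geht es dir?"))
```

Thresholding on the classifier confidence is what makes such a model usable in dataset creation pipelines: low-confidence predictions can be dropped rather than mislabeled, which directly controls the false positive rate discussed above.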
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Language Identification | FLORES-200 CLD3 1.0 | F1 Score | 98.3 | 8 |
| Language Identification | FLORES-200 FT176 1.0 | F1 Score | 99.1 | 8 |
| Language Identification | FLORES-200 OpenLID 1.0 | F1 Score | 94.7 | 8 |
| Language Identification | FLORES-200 NLLB 1.0 | F1 Score | 95.4 | 8 |
| Language Identification | UDHR CLD3 1.0 | F1 Score | 95.2 | 8 |
| Language Identification | UDHR FT176 1.0 | F1 Score | 92.7 | 8 |
| Language Identification | UDHR OpenLID 1.0 | F1 Score | 92.7 | 8 |
| Language Identification | UDHR NLLB 1.0 | F1 Score | 92.7 | 8 |
| Language Identification | SLIDE | Loose Accuracy | 97.2 | 8 |
| Language Identification | Nordic DSL 50k | Loose Accuracy | 96.19 | 8 |