Goldfish: Monolingual Language Models for 350 Languages
About
For many low-resource languages, the only available language models are large multilingual models trained on many languages simultaneously. Despite state-of-the-art performance on reasoning tasks, we find that these models still struggle with basic grammatical text generation in many languages. First, measured by FLORES perplexity, large multilingual models perform worse than simple bigram models for many languages (e.g. 24% of languages for XGLM 4.5B and 43% for BLOOM 7.1B). Second, when we train small monolingual models with only 125M parameters on at most 1GB of text data for 350 languages, these small models outperform large multilingual models both in perplexity and on a massively multilingual grammaticality benchmark. To facilitate future work on low-resource language modeling, we release Goldfish, a suite of over 1,000 small monolingual language models trained comparably for 350 languages. These models represent the first publicly available monolingual language models for 215 of the languages included.
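The released models can be used like any other causal language model. As a minimal sketch (the Hugging Face organization name and model id below are assumptions for illustration, not confirmed by this page), loading one Goldfish model and sampling a continuation with `transformers` might look like this:

```python
# Minimal sketch: loading a Goldfish model and generating text.
# The organization and model id are assumptions; check the released model list for exact names.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "goldfish-models/eng_latn_1000mb"  # hypothetical id: English, Latin script, 1GB data split

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("The goldfish swam", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```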
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Reading Comprehension | Belebele | Accuracy | 28.2 | 39 |
| Story Reasoning | XStoryCloze | Accuracy | 52.3 | 35 |
| Commonsense Reasoning | XCOPA | Accuracy | 55.1 | 32 |
| Commonsense Reasoning | XStoryCloze | -- | -- | 32 |
| Language Modeling | Flores-200 (test) | Mean Perplexity | 76.9 | 12 |
| Language Modeling | Flores-200 | Perplexity Win Rate | 202 | 9 |
| Linguistic Knowledge | MultiBLiMP (avg) | Accuracy | 78.8 | 8 |
| Causal Reasoning | XCOPA | -- | -- | 8 |
| Reading Comprehension | Belebele | Accuracy (Estonian) | 29.33 | 6 |
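The perplexity rows above come from scoring FLORES sentences with a causal language model. How exactly the numbers are aggregated (per token, per byte, or per sentence) is not stated on this page, so the per-token average below is only a sketch of the general procedure, with a hypothetical model id and toy sentences:

```python
# Minimal sketch: mean token-level perplexity of a causal LM over a list of sentences.
# The aggregation (per-token average) and the model id are assumptions for illustration.
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def mean_perplexity(model_id: str, sentences: list[str]) -> float:
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)
    model.eval()

    total_nll, total_tokens = 0.0, 0
    with torch.no_grad():
        for text in sentences:
            enc = tokenizer(text, return_tensors="pt")
            # With labels equal to the inputs, the model returns the mean
            # cross-entropy over the predicted (shifted) tokens.
            out = model(**enc, labels=enc["input_ids"])
            n_predicted = enc["input_ids"].size(1) - 1
            total_nll += out.loss.item() * n_predicted
            total_tokens += n_predicted
    return math.exp(total_nll / total_tokens)

# Hypothetical usage:
# print(mean_perplexity("goldfish-models/est_latn_100mb", ["Tere hommikust!", "Kuidas läheb?"]))
```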