
Goldfish: Monolingual Language Models for 350 Languages

About

For many low-resource languages, the only available language models are large multilingual models trained on many languages simultaneously. Despite state-of-the-art performance on reasoning tasks, these models still struggle with basic grammatical text generation in many languages. First, using FLORES perplexity as the evaluation metric, large multilingual models perform worse than simple bigram models for many languages (e.g. 24% of languages for XGLM 4.5B and 43% for BLOOM 7.1B). Second, when we train small monolingual models with only 125M parameters on at most 1GB of text per language for 350 languages, these small models outperform the large multilingual models both in perplexity and on a massively multilingual grammaticality benchmark. To facilitate future work on low-resource language modeling, we release Goldfish, a suite of over 1,000 small monolingual language models trained comparably for 350 languages. For 215 of these languages, the Goldfish models are the first publicly available monolingual language models.

Tyler A. Chang, Catherine Arnett, Zhuowen Tu, Benjamin K. Bergen • 2024
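The released models are published on Hugging Face under the goldfish-models organization. As a rough illustration of the FLORES-style perplexity evaluation described in the abstract above, the sketch below loads one of the small monolingual models with the transformers library and computes sentence-level perplexity. The exact model ID and the sample sentences are placeholders, not the paper's evaluation setup.

```python
# Sketch: sentence-level perplexity with a small monolingual causal LM.
# The model ID below is an assumption (Goldfish models are hosted under the
# goldfish-models organization on Hugging Face); substitute the language you need.
import math

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "goldfish-models/eng_latn_1000mb"  # placeholder model ID

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)
model.eval()


def perplexity(text: str) -> float:
    """Perplexity of `text` under the model (exp of the mean token NLL)."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # With labels=input_ids, the model returns the mean cross-entropy
        # over the predicted tokens.
        loss = model(**enc, labels=enc["input_ids"]).loss
    return math.exp(loss.item())


# Toy stand-in sentences; the actual evaluation uses the FLORES-200 test set.
sentences = [
    "The quick brown fox jumps over the lazy dog.",
    "Language models can be surprisingly small and still useful.",
]
print(sum(perplexity(s) for s in sentences) / len(sentences))
```

Note that raw perplexities are tokenizer-dependent, so numbers from models with different vocabularies are not directly comparable; the sketch only shows the mechanics of scoring text with one of the released models.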

Related benchmarks

Task                    Dataset             Metric                 Result   Rank
Reading Comprehension   Belebele            Accuracy               28.2     39
Story Reasoning         XStoryCloze         Accuracy               52.3     35
Commonsense Reasoning   XCOPA               Accuracy               55.1     32
Commonsense Reasoning   XStoryCloze         --                     --       32
Language Modeling       Flores-200 (test)   Mean Perplexity        76.9     12
Language Modeling       Flores-200          Perplexity Win Rate    202      9
Linguistic Knowledge    MultiBLiMP (avg)    Accuracy               78.8     8
Causal Reasoning        XCOPA               --                     --       8
Reading Comprehension   Belebele            Accuracy (Estonian)    29.33    6
