
BERTIN: Efficient Pre-Training of a Spanish Language Model using Perplexity Sampling

About

The pre-training of large language models usually requires massive amounts of resources, both in terms of computation and data. Frequently used web sources such as Common Crawl might contain enough noise to make this pre-training sub-optimal. In this work, we experiment with different sampling methods from the Spanish version of mC4, and present a novel data-centric technique, which we name perplexity sampling, that enables the pre-training of language models in roughly half the number of steps and using one fifth of the data. The resulting models are comparable to the current state-of-the-art, and even achieve better results for certain tasks. Our work is proof of the versatility of Transformers, and paves the way for small teams to train their models on a limited budget. Our models are available at https://huggingface.co/bertin-project.
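The abstract does not spell out the mechanics, but the core idea of perplexity sampling is to score each candidate web document with a lightweight language model and keep documents with probability that depends on their perplexity, discarding both boilerplate-like and very noisy text. Below is a minimal sketch, assuming a KenLM n-gram scorer and a Gaussian weighting over document perplexity; the model path, the mu/sigma values, and the helper names are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch of perplexity sampling, not the authors' exact code.
# Assumes a pre-trained KenLM model for Spanish; weighting parameters are hypothetical.
import math
import random

import kenlm  # Python bindings for the KenLM n-gram language model

model = kenlm.Model("es_wikipedia.5gram.arpa")  # hypothetical model path

def perplexity(text: str) -> float:
    """Per-word perplexity of a document under the n-gram model."""
    words = text.split()
    log10_prob = model.score(" ".join(words), bos=True, eos=True)
    return 10 ** (-log10_prob / max(len(words), 1))

def keep_probability(ppl: float, mu: float = 700.0, sigma: float = 300.0) -> float:
    """Gaussian weighting: favor documents near a target perplexity,
    down-weighting both the very low (boilerplate-like) and very high (noisy) tails."""
    return math.exp(-((ppl - mu) ** 2) / (2 * sigma ** 2))

def perplexity_sample(docs, rng=random.Random(0)):
    """Yield a subsample of `docs` drawn according to the perplexity weighting."""
    for doc in docs:
        if rng.random() < keep_probability(perplexity(doc)):
            yield doc
```

Sampling this way lets a model train on a fraction of the corpus while concentrating on text of moderate, natural-language-like perplexity, which is what allows the reported reduction in training steps and data.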

Javier de la Rosa, Eduardo G. Ponferrada, Paulo Villegas, Pablo Gonzalez de Prado Salas, Manu Romero, María Grandury • 2022

Related benchmarks

Task | Dataset | Metric | Result | Rank
Medical Question Answering | PubMedQA Synthetic NIID 1.0 (test) | Accuracy | 68.4 | 7
Algebraic Question Answering | AQUA-RAT Synthetic IID 1.0 (test) | Accuracy | 22.4 | 7
Algebraic Question Answering | AQUA-RAT Synthetic NIID 1.0 (test) | Accuracy | 21.7 | 7
Medical Question Answering | PubMedQA Synthetic IID 1.0 (test) | Accuracy | 70.3 | 7
Molecular Science Instructions | Mol-Instructions Synthetic IID 1.0 (test) | BertScore | 0.809 | 7
Molecular Science Instructions | Mol-Instructions Synthetic NIID 1.0 (test) | BertScore | 0.804 | 7
Instruction Following | Fed-WildChat Real Dataset 1.0 (test) | MT-Bench Score | 4.525 | 6
Financial Question Answering | FIQA Synthetic NIID 1.0 (test) | Win Rate | 54.4 | 6
Financial Question Answering | FIQA Synthetic IID 1.0 (test) | Win Rate | 43.7 | 6
