
HPLT 3.0: Very Large-Scale Multilingual Resources for LLM and MT. Mono- and Bi-lingual Data, Multilingual Evaluation, and Pre-Trained Models

About

We present an ongoing initiative to provide open, very large, high-quality, and richly annotated textual datasets for almost 200 languages. At 30 trillion tokens, this is likely the largest generally available multilingual collection of LLM pre-training data. These datasets are derived from web crawls from different sources and accompanied by a complete, open-source pipeline for document selection from web archives, text extraction from HTML, language identification for noisy texts, exact and near-deduplication, annotation with (among others) register labels, text quality estimates, and personally identifiable information, and final selection and filtering. We report on data quality probes through contrastive and analytical statistics, manual inspection of samples for 24 languages, and end-to-end evaluation of various language model architectures trained on this data. For multilingual LLM evaluation, we provide a comprehensive collection of benchmarks for nine European languages, with special emphasis on natively created tasks, mechanisms to mitigate prompt sensitivity, and refined normalization and aggregation of scores. Additionally, we train and evaluate a family of 57 monolingual encoder-decoder models, as well as a handful of monolingual GPT-like reference models. Besides the monolingual data and models, we also present a very large collection of parallel texts automatically mined from this data, together with a novel parallel corpus synthesized via machine translation.
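The abstract mentions exact and near-deduplication as one stage of the pipeline. The paper's actual implementation is not reproduced here; as an illustration only, the following is a minimal sketch of the general MinHash technique commonly used for near-deduplication of web text, with all function names and parameters chosen for this example:

```python
import hashlib

def shingles(text: str, n: int = 3) -> set[str]:
    """Word n-grams ('shingles') used as the unit of comparison."""
    words = text.split()
    return {" ".join(words[i:i + n]) for i in range(max(1, len(words) - n + 1))}

def minhash(shingle_set: set[str], num_perm: int = 64) -> list[int]:
    """MinHash signature: for each of num_perm seeded hash functions,
    keep the minimum hash value over all shingles."""
    sig = []
    for seed in range(num_perm):
        sig.append(min(
            int.from_bytes(
                hashlib.blake2b(s.encode(), digest_size=8,
                                salt=seed.to_bytes(8, "big")).digest(),
                "big")
            for s in shingle_set
        ))
    return sig

def estimated_jaccard(sig_a: list[int], sig_b: list[int]) -> float:
    """Fraction of matching signature positions approximates the
    Jaccard similarity of the underlying shingle sets."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

doc1 = "the quick brown fox jumps over the lazy dog near the river bank"
doc2 = "the quick brown fox jumps over the lazy dog near the river"
doc3 = "an entirely different sentence about language identification quality"

s1, s2, s3 = (minhash(shingles(d)) for d in (doc1, doc2, doc3))
print(estimated_jaccard(s1, s2))  # high: near-duplicate documents
print(estimated_jaccard(s1, s3))  # near zero: unrelated documents
```

Documents whose estimated similarity exceeds a chosen threshold would then be clustered and all but one representative dropped; production pipelines additionally bucket signatures with locality-sensitive hashing to avoid pairwise comparison.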

Stephan Oepen, Nikolay Arefev, Mikko Aulamo, Marta Bañón, Maja Buljan, Laurie Burchell, Lucas Charpentier, Pinzhen Chen, Mariya Fedorova, Ona de Gibert, Barry Haddow, Jan Hajič, Jindřich Helcl, Andrey Kutuzov, Veronika Laippala, Zihao Li, Risto Luukkonen, Bhavitvya Malik, Vladislav Mikhailov, Amanda Myntti, Dayyán O'Brien, Lucie Poláková, Sampo Pyysalo, Gema Ramírez Sánchez, Janine Siewert, Pavel Stepachev, Jörg Tiedemann, Teemu Vahtola, Dušan Variš, Fedor Vitiugin, Tea Vojtěchová, Jaume Zaragoza · 2025

Related benchmarks

Task                    | Dataset                                | Metric                  | Result | Rank
Language Identification | SLIDE                                  | Loose Accuracy          | 95.63  | 8
Language Identification | Nordic DSL 50k                         | Loose Accuracy          | 94.32  | 8
Language Identification | FLORES+ (devtest)                      | Loose Accuracy          | 99.97  | 8
Language Identification | FastSpell n=6,809 (excluding Nynorsk)  | FPR                     | 0.39   | 7
Language Identification | UDHR n=10,283 (test)                   | FPR                     | 0.025  | 7
Language Identification | FLORES+                                | FPR (Norwegian Bokmål)  | 0.01   | 4
Language Identification | SLIDE                                  | FPR (Norwegian Bokmål)  | 5      | 4
Language Identification | Nordic DSL                             | FPR (Norwegian Bokmål)  | 2      | 4
Language Identification | Twitter users                          | FPR (Bosnian)           | 0.8    | 4
Language Identification | ParlaSent                              | FPR (Bosnian)           | 44     | 4
(10 of 11 rows shown)
