
UberWeb: Insights from Multilingual Curation for a 20-Trillion-Token Dataset

About

Multilinguality is a core capability for modern foundation models, yet training high-quality multilingual models remains challenging due to uneven data availability across languages. A further challenge is the performance interference that can arise from joint multilingual training, commonly referred to as the "curse of multilinguality". We study multilingual data curation across thirteen languages and find that many reported regressions stem from correctable deficiencies in data quality and composition rather than fundamental capacity limits. In controlled bilingual experiments, improving data quality for any single language benefits others: curating English improves non-English performance in 12 of 13 languages, while curating non-English yields reciprocal improvements in English. Bespoke per-language curation produces substantially larger within-language improvements. Extending these findings to large-scale general-purpose training mixtures, we show that curated multilingual allocations comprising under 8% of total tokens remain remarkably effective. We operationalize this approach within an effort that produced a 20T-token pretraining corpus derived entirely from public sources. Models with 3B and 8B parameters trained on a 1T-token random subset achieve competitive multilingual accuracy with 4-10x fewer training FLOPs than strong public baselines, establishing a new Pareto frontier in multilingual performance versus compute. Moreover, these benefits extend to frontier model scale: the 20T-token corpus served as part of the pretraining dataset for Trinity Large (400B/A13B), which exhibits strong multilingual performance relative to its training FLOPs. These results show that targeted, per-language data curation mitigates multilingual interference and enables compute-efficient multilingual scaling.
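The "4-10x fewer training FLOPs" comparison can be made concrete with the common ~6ND rule of thumb for dense-transformer training compute (N = parameters, D = training tokens). The sketch below applies it to the abstract's 3B and 8B models trained on a 1T-token subset; the 6ND approximation is a standard estimate, not a figure reported by the paper.

```python
def train_flops(n_params: float, n_tokens: float) -> float:
    """Approximate dense-transformer training FLOPs as 6 * parameters * tokens."""
    return 6 * n_params * n_tokens

ONE_T = 1e12  # 1 trillion tokens, per the abstract's 1T-token subset

for name, n_params in [("3B", 3e9), ("8B", 8e9)]:
    flops = train_flops(n_params, ONE_T)
    print(f"{name} model on 1T tokens: ~{flops:.1e} training FLOPs")

# Under the same approximation, a baseline needing 4-10x more compute than
# the 8B run would sit roughly in this FLOPs range (illustrative only):
baseline_low = 4 * train_flops(8e9, ONE_T)
baseline_high = 10 * train_flops(8e9, ONE_T)
```

Under this estimate the 3B run costs ~1.8e22 FLOPs and the 8B run ~4.8e22 FLOPs, so a baseline at 4-10x the 8B compute would land near 2e23-5e23 FLOPs.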

DatologyAI: Aldo Gael Carranza, Kaleigh Mentzer, Ricardo Pio Monti, Alex Fang, Alvin Deng, Amro Abbas, Anshuman Suri, Brett Larsen, Cody Blakeney, Darren Teh, David Schwab, Diego Kiner, Fan Pan, Haakon Mongstad, Haoli Yin, Jack Urbanek, Jason Lee, Jason Telanoff, Josh Wills, Luke Merrick, Maximilian Böther, Parth Doshi, Paul Burstein, Pratyush Maini, Rishabh Adiga, Siddharth Joshi, Spandan Das, Tony Jiang, Vineeth Dorna, Zhengping Wang, Bogdan Gaza, Ari Morcos, Matthew Leavitt • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Reading Comprehension | Belebele | Average RC Score (BELEBELE) | 80 | 31 |
| Science Question Answering | ARC-C (test) | Accuracy | 90 | 25 |
| Language Understanding | MMLU German (test) | Accuracy | 73 | 15 |
| Language Understanding | MMLU French (test) | Accuracy | 71 | 15 |
| Science Question Answering | ARC-C German (test) | Accuracy | 91 | 15 |
| Science Question Answering | ARC-C French (test) | Accuracy | 90 | 15 |
| Multitask Language Understanding | MMLU Korean (test) | Accuracy | 72 | 12 |
| Multitask Language Understanding | MMLU Hindi | Accuracy | 67 | 12 |
| Reading Comprehension | Belebele Korean (test) | Accuracy | 90 | 12 |
| Reading Comprehension | Belebele Hindi | Accuracy | 84 | 12 |

Showing 10 of 38 rows.
