
CCNet: Extracting High Quality Monolingual Datasets from Web Crawl Data

About

Pre-trained text representations have led to significant improvements in many areas of natural language processing. The quality of these models benefits greatly from the size of the pre-training corpora, as long as their quality is preserved. In this paper, we describe an automatic pipeline to extract massive high-quality monolingual datasets from Common Crawl for a variety of languages. Our pipeline follows the data processing introduced in fastText (Mikolov et al., 2017; Grave et al., 2018), which deduplicates documents and identifies their language. We augment this pipeline with a filtering step to select documents that are close to high-quality corpora like Wikipedia.
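The deduplication step described above can be sketched in a few lines: paragraphs are normalized and hashed, and any paragraph whose hash has already been seen anywhere in the corpus is dropped. The sketch below is illustrative only (the normalization choices and function names are assumptions, not the authors' code):

```python
import hashlib
import re


def normalize(paragraph: str) -> str:
    """Lowercase and strip digits/punctuation so near-identical
    boilerplate paragraphs hash to the same value (assumed scheme)."""
    text = paragraph.lower()
    text = re.sub(r"\d+", "", text)
    text = re.sub(r"[^\w\s]", "", text)
    return re.sub(r"\s+", " ", text).strip()


def deduplicate(documents):
    """Drop paragraphs whose normalized hash was already seen across
    the corpus; discard documents left empty afterwards."""
    seen = set()
    out = []
    for doc in documents:
        kept = []
        for para in doc:
            h = hashlib.sha1(normalize(para).encode("utf-8")).digest()
            if h not in seen:
                seen.add(h)
                kept.append(para)
        if kept:
            out.append(kept)
    return out


docs = [
    ["Click here to subscribe!", "CCNet filters Common Crawl."],
    ["Click here to subscribe!", "A second, unrelated article."],
]
print(deduplicate(docs))
```

Hashing at the paragraph level (rather than the document level) is what lets this pass remove repeated navigation menus and footers that appear across many otherwise distinct pages.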

Guillaume Wenzek, Marie-Anne Lachaux, Alexis Conneau, Vishrav Chaudhary, Francisco Guzmán, Armand Joulin, Edouard Grave • 2019

Related benchmarks

| Task                  | Dataset                     | Result                   | Rank |
|-----------------------|-----------------------------|--------------------------|------|
| Commonsense Reasoning | HellaSwag                   | –                        | 1460 |
| Reasoning             | ARC Easy                    | –                        | 183  |
| Question Answering    | BoolQ                       | Delta Accuracy 0.44      | 15   |
| Reasoning             | ARC Hard                    | Accuracy Improvement 0.6 | 12   |
| Reasoning             | WinoGrande                  | Accuracy Improvement 0.9 | 12   |
| Reasoning             | PIQA                        | Accuracy Improvement 0.7 | 12   |
| Question Answering    | OBQA                        | Accuracy Improvement 0.75| 12   |
| Reasoning             | SIQA                        | Accuracy Improvement 0.27| 12   |
| Data Filtering        | FineWeb-edu CC-MAIN-2024-10 | Recall@30 55.3           | 7    |
