The Pile: An 800GB Dataset of Diverse Text for Language Modeling
About
Recent work has demonstrated that increased training dataset diversity improves general cross-domain knowledge and downstream generalization capability for large-scale language models. With this in mind, we present *the Pile*: an 825 GiB English text corpus targeted at training large-scale language models. The Pile is constructed from 22 diverse high-quality subsets, both existing and newly constructed, many of which derive from academic or professional sources. Our evaluation of the untuned performance of GPT-2 and GPT-3 on the Pile shows that these models struggle on many of its components, such as academic writing. Conversely, models trained on the Pile improve significantly over both Raw CC and CC-100 on all components of the Pile, while improving performance on downstream evaluations. Through an in-depth exploratory analysis, we document potentially concerning aspects of the data for prospective users. We make publicly available the code used in its construction.
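For prospective users, the sketch below shows one way the corpus text can be streamed with the HuggingFace `datasets` library. The mirror identifier `monology/pile-uncopyrighted` and the record field names are assumptions (a community-hosted copy), not part of the original release; substitute whichever Pile copy you actually use.

```python
# Minimal sketch: stream Pile records for language-model training or inspection.
# Dataset id and field names ("text", "meta", "pile_set_name") are assumptions
# based on a community mirror, not the official release layout.
from datasets import load_dataset

pile = load_dataset("monology/pile-uncopyrighted", split="train", streaming=True)

for example in pile.take(3):
    # Each record carries the raw document plus metadata naming its source subset.
    text = example["text"]
    subset = example.get("meta", {}).get("pile_set_name", "unknown")
    print(f"[{subset}] {text[:80]!r}")
```

Streaming avoids downloading the full ~825 GiB corpus up front, which is usually the practical choice for exploratory analysis.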
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Language Understanding | General Understanding Tasks (ARC-E, BoolQ, Wino., PIQA, HellaSwag, TruthfulQA, OBQA, LogiQA) | ARC-E Accuracy: 60.5 | 8 |
| Question Answering | Specialized Knowledge Tasks (ARC-C, SciQ, PubMedQA, MathQA, MMLU), zero-shot | ARC-C: 26.1 | 8 |
| Language Model Evaluation | 1.3B LLM Leaderboard | ARC: 32.7 | 5 |
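The results above are zero-shot scores of the kind typically produced with EleutherAI's lm-evaluation-harness. The sketch below illustrates such a run; the checkpoint name is a placeholder for any ~1.3B model, and the exact harness version, task configurations, and metrics behind these particular leaderboard numbers are not stated here, so treat it as illustrative only.

```python
# Illustrative zero-shot evaluation with lm-evaluation-harness (pip install lm-eval).
# "EleutherAI/pythia-1.4b" is a placeholder checkpoint; the leaderboards above may
# use different models, task configs, or harness versions.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=EleutherAI/pythia-1.4b",
    tasks=["arc_easy", "arc_challenge", "boolq", "piqa", "hellaswag"],
    num_fewshot=0,
    batch_size=8,
)

# Print per-task metrics (e.g. accuracy) as reported by the harness.
for task, metrics in results["results"].items():
    print(task, metrics)
```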