
Zyda-2: a 5 Trillion Token High-Quality Dataset

About

In this technical report, we present Zyda-2: a five trillion token dataset for language model pretraining. Zyda-2 was used to train our Zamba2 series of models, which are state-of-the-art for their weight class. We build Zyda-2 by collating tokens from high-quality open-source datasets such as FineWeb and DCLM, then distilling them to the highest-quality subset via cross-deduplication and model-based quality filtering. Zyda-2 is released under a permissive open license and is available at https://huggingface.co/datasets/Zyphra/Zyda-2
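The two distillation steps named above can be sketched in miniature. The report does not specify the implementation here, so the following is a hypothetical illustration, not Zyda-2's actual pipeline: exact-hash cross-deduplication that keeps the first occurrence of each document across all source corpora, and a pluggable `score_fn` standing in for a model-based quality classifier. All function names and the threshold value are assumptions for illustration.

```python
import hashlib

def doc_hash(text: str) -> str:
    # Normalize whitespace before hashing so trivial formatting
    # differences do not defeat deduplication.
    return hashlib.sha256(" ".join(text.split()).encode()).hexdigest()

def cross_deduplicate(corpora: dict) -> dict:
    """Keep the first occurrence of each document across all corpora.

    `corpora` maps a source name (e.g. "fineweb", "dclm") to a list of
    document strings; returns a dict with duplicates removed globally,
    not just within each source (hence "cross"-deduplication).
    """
    seen = set()
    result = {}
    for name, docs in corpora.items():
        kept = []
        for doc in docs:
            h = doc_hash(doc)
            if h not in seen:
                seen.add(h)
                kept.append(doc)
        result[name] = kept
    return result

def quality_filter(docs: list, score_fn, threshold: float = 0.5) -> list:
    # Stand-in for a model-based classifier: keep only documents whose
    # quality score clears the threshold.
    return [d for d in docs if score_fn(d) >= threshold]
```

In practice, production pipelines typically use approximate methods (e.g. MinHash-based near-duplicate detection) rather than exact hashing, since near-duplicates with small edits are common in web data; the exact-hash version above is only the simplest instance of the idea.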

Yury Tokpanov, Paolo Glorioso, Quentin Anthony, Beren Millidge • 2024

Related benchmarks

Task             Dataset                                    Result                 Rank
Logic reasoning  ARC-Challenge & LogiQA OpenCompass (test)  ARC-C Accuracy 36.61   4
