MegaScience: Pushing the Frontiers of Post-Training Datasets for Science Reasoning
About
Scientific reasoning is critical for developing AI scientists and supporting human researchers in advancing the frontiers of natural science discovery. However, the open-source community has focused primarily on mathematics and coding while neglecting the scientific domain, largely due to the absence of open, large-scale, high-quality, verifiable scientific reasoning datasets. To bridge this gap, we first present TextbookReasoning, an open dataset featuring truthful reference answers extracted from 12k university-level scientific textbooks, comprising 650k reasoning questions spanning 7 scientific disciplines. We further introduce MegaScience, a large-scale mixture of high-quality open-source datasets totaling 1.25 million instances, developed through systematic ablation studies that evaluate various data selection methodologies to identify the optimal subset for each publicly available scientific dataset. We also build a comprehensive evaluation system covering diverse subjects and question types across 15 benchmarks, with robust answer extraction strategies to ensure accurate evaluation metrics. Our experiments demonstrate that models trained on our datasets achieve superior performance and training efficiency, with more concise response lengths, than those trained on existing open-source scientific datasets. Furthermore, we train the Llama3.1, Qwen2.5, and Qwen3 series base models on MegaScience; they significantly outperform the corresponding official instruct models in average performance. In addition, MegaScience is more effective for larger and stronger models, suggesting a scaling benefit for scientific tuning. We release our data curation pipeline, evaluation system, datasets, and seven trained models to the community to advance scientific reasoning research.
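One detail worth unpacking is the answer extraction step mentioned above: benchmark scores are only as trustworthy as the logic that pulls a final answer out of a model's free-form reasoning before comparing it to the reference. The sketch below illustrates what such a strategy typically looks like; it is a minimal example assuming common output formats (LaTeX \boxed{} answers, "the answer is ..." statements, trailing multiple-choice letters), not the released MegaScience evaluation code.

```python
import re

def extract_final_answer(response: str) -> str | None:
    """Pull a final answer out of a free-form model response.

    Tries patterns in decreasing order of reliability and returns
    None when nothing matches, so the caller can count the sample
    as unanswered rather than wrong-by-parsing.
    """
    # 1. LaTeX-style \boxed{...} answers, common in math/science outputs.
    boxed = re.findall(r"\\boxed\{([^{}]+)\}", response)
    if boxed:
        return boxed[-1].strip()

    # 2. Explicit natural-language statements such as "The answer is X".
    stated = re.findall(
        r"(?:final answer|the answer)\s*(?:is|:)\s*([^\n.]+)",
        response,
        flags=re.IGNORECASE,
    )
    if stated:
        return stated[-1].strip()

    # 3. A lone option letter on the last line, for multiple-choice items.
    lines = response.strip().splitlines()
    if lines:
        letters = re.findall(r"\b([A-E])\b", lines[-1])
        if letters:
            return letters[-1]

    return None

print(extract_final_answer(r"Thus the result is \boxed{35.8}."))   # -> 35.8
print(extract_final_answer("After elimination, the answer is C.")) # -> C
```

Falling back through several patterns, rather than trusting a single regex, is what keeps parsing failures from being silently scored as wrong answers.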
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Mathematical Reasoning | OlympiadBench Math | Accuracy | 57.6 | 84 |
| Mathematical Reasoning | Omni-MATH | Accuracy | 35.8 | 68 |
| Medical Knowledge Question Answering | Medical Domain (MedQA, MMLU, MedMCQA) (test) | MedQA Score | 61 | 45 |
| Mathematical Reasoning | HMMT 2025 | Accuracy | 12.1 | 38 |
| Mathematical Reasoning | AIME 2025 | Accuracy | 17.9 | 37 |
| Scientific Reasoning | GPQA Diamond | Pass@1 | 0.505 | 32 |
| Legal Knowledge Evaluation | Legal Domain (CaseHOLD, MMLU-L) | CaseHOLD Score | 60.5 | 26 |
| Scientific Reasoning | SuperGPQA | Mean@1 | 44.4 | 24 |
| Scientific Reasoning | MMLU-Pro | Pass@1 | 71.9 | 17 |
| Scientific Reasoning | GPQA General | Pass@1 | 23.6 | 17 |
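The Metric column mixes conventions: Accuracy and the per-domain scores are plain percent-correct, while Pass@1 and Mean@1 come from sampling-based evaluation, where Pass@k is the probability that at least one of k sampled responses is correct and Mean@k averages accuracy over k independent runs. Below is a minimal sketch of the standard unbiased Pass@k estimator (Chen et al., 2021); whether this exact formulation is what the benchmark harness uses is an assumption, but at k = 1 it reduces to plain accuracy, which is why the table can mix the two.

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased Pass@k estimator (Chen et al., 2021).

    n: samples drawn per problem, c: correct samples among them,
    k: attempt budget being scored. Returns the probability that
    at least one of k randomly chosen samples is correct.
    """
    if n - c < k:
        return 1.0  # too few incorrect samples to fill all k slots
    return 1.0 - comb(n - c, k) / comb(n, k)

def mean_at_k(per_run_accuracies: list[float]) -> float:
    """Mean@k: accuracy averaged over k independent evaluation runs."""
    return sum(per_run_accuracies) / len(per_run_accuracies)

# At k = 1, Pass@1 is simply c / n: e.g. 505 correct answers out of
# 1000 samples gives 0.505, the same scale as the GPQA Diamond row
# above (illustrative numbers, not the actual sample counts used).
assert abs(pass_at_k(1000, 505, 1) - 0.505) < 1e-9
```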