Galactica: A Large Language Model for Science
About
Information overload is a major obstacle to scientific progress. The explosive growth in scientific literature and data has made it ever harder to discover useful insights in a large mass of information. Today scientific knowledge is accessed through search engines, but they are unable to organize scientific knowledge alone. In this paper we introduce Galactica: a large language model that can store, combine and reason about scientific knowledge. We train on a large scientific corpus of papers, reference material, knowledge bases and many other sources. We outperform existing models on a range of scientific tasks. On technical knowledge probes such as LaTeX equations, Galactica outperforms the latest GPT-3 by 68.2% versus 49.0%. Galactica also performs well on reasoning, outperforming Chinchilla on mathematical MMLU by 41.3% to 35.7%, and PaLM 540B on MATH with a score of 20.4% versus 8.8%. It also sets a new state-of-the-art on downstream tasks such as PubMedQA and MedMCQA dev of 77.6% and 52.9%. And despite not being trained on a general corpus, Galactica outperforms BLOOM and OPT-175B on BIG-bench. We believe these results demonstrate the potential for language models as a new interface for science. We open source the model for the benefit of the scientific community.
Related benchmarks
| Task | Dataset | Metric | Score | Rank |
|---|---|---|---|---|
| Code Generation | HumanEval | Pass@1 | 7.31 | 850 |
| Language Understanding | MMLU | Accuracy | 37.13 | 756 |
| Mathematical Reasoning | GSM8K (test) | Accuracy | 41.7 | 751 |
| Question Answering | ARC Challenge | -- | -- | 749 |
| Mathematical Reasoning | MATH | Accuracy | 5.3 | 535 |
| Mathematical Reasoning | MATH (test) | Overall Accuracy | 20.4 | 433 |
| Question Answering | ARC Easy | Normalized Accuracy | 83.8 | 385 |
| Multitask Language Understanding | MMLU (test) | -- | -- | 303 |
| Mathematical Reasoning | ASDIV | Accuracy | 0.409 | 221 |
| Mathematical Reasoning | MAWPS | Accuracy | 51.7 | 219 |
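The HumanEval row above reports Pass@1. For context, Pass@k is conventionally computed with the unbiased estimator from the HumanEval benchmark (generate `n` samples per problem, count the `c` that pass the unit tests); this is a general sketch of that metric, not code from the Galactica paper:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: the probability that at least one of k
    samples, drawn without replacement from n generations of which c are
    correct, passes the tests. Equals 1 - C(n-c, k) / C(n, k)."""
    if n - c < k:
        # Fewer than k incorrect samples exist, so any draw of k
        # must include a correct one.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Example: 10 generations, 1 correct -> pass@1 = 0.1
print(pass_at_k(10, 1, 1))
```

Averaging this quantity over all problems in the benchmark gives the reported Pass@k score.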