
Cerebras-GPT: Open Compute-Optimal Language Models Trained on the Cerebras Wafer-Scale Cluster

About

We study recent research advances that improve large language models through efficient pre-training and scaling, and open datasets and tools. We combine these advances to introduce Cerebras-GPT, a family of open compute-optimal language models scaled from 111M to 13B parameters. We train Cerebras-GPT models on the Eleuther Pile dataset following DeepMind Chinchilla scaling rules for efficient pre-training (highest accuracy for a given compute budget). We characterize the predictable power-law scaling and compare Cerebras-GPT with other publicly-available models to show all Cerebras-GPT models have state-of-the-art training efficiency on both pre-training and downstream objectives. We describe our learnings including how Maximal Update Parameterization ($\mu$P) can further improve large model scaling, improving accuracy and hyperparameter predictability at scale. We release our pre-trained models and code, making this paper the first open and reproducible work comparing compute-optimal model scaling to models trained on fixed dataset sizes. Cerebras-GPT models are available on HuggingFace: https://huggingface.co/cerebras.
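The abstract's "Chinchilla scaling rules for efficient pre-training" refer to the rule of thumb that a compute-optimal run uses roughly 20 training tokens per model parameter, with total pre-training compute approximated as C ≈ 6·N·D FLOPs. A minimal sketch of that budgeting, assuming the approximate 20:1 token-to-parameter ratio (the exact ratios used for Cerebras-GPT are not stated on this page):

```python
# Sketch of Chinchilla-style compute-optimal budgeting.
# The 20 tokens-per-parameter ratio and the C ~= 6*N*D FLOPs estimate
# are standard approximations, assumed here for illustration.

TOKENS_PER_PARAM = 20  # approximate Chinchilla ratio (assumption)

def compute_optimal_tokens(n_params: int) -> int:
    """Training tokens for a compute-optimal run of a model with n_params parameters."""
    return TOKENS_PER_PARAM * n_params

def training_flops(n_params: int, n_tokens: int) -> float:
    """Standard C ~= 6 * N * D estimate of pre-training FLOPs."""
    return 6.0 * n_params * n_tokens

# Model sizes spanning the Cerebras-GPT range mentioned in the abstract (111M to 13B)
for n in (111_000_000, 1_300_000_000, 13_000_000_000):
    d = compute_optimal_tokens(n)
    print(f"{n/1e9:5.2f}B params -> {d/1e9:7.1f}B tokens, {training_flops(n, d):.2e} FLOPs")
```

This is why compute-optimal models in the family are trained on progressively larger slices of the Pile as the parameter count grows, rather than on a fixed dataset size.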

Nolan Dey, Gurpreet Gosal, Zhiming (Charles) Chen, Hemant Khachane, William Marshall, Ribhu Pathria, Marvin Tom, Joel Hestness • 2023

Related benchmarks

Task | Dataset | Result | Rank
--- | --- | --- | ---
Reasoning | HellaSwag (HS) | HellaSwag Accuracy: 38.6 | 162
Reasoning | ARC-C | -- | 80
Zero/Few-shot Language Modeling | Standard Downstream Tasks (arc-c, arc-e, boolq, hellaswag, piqa, siqa, winogrande) | ARC-C Accuracy: 36.01 | 55
Reasoning | OBQA | Accuracy: 20.6 | 26
