
Pythia: A Suite for Analyzing Large Language Models Across Training and Scaling

About

How do large language models (LLMs) develop and evolve over the course of training? How do these patterns change as models scale? To answer these questions, we introduce Pythia, a suite of 16 LLMs, all trained on public data seen in the exact same order and ranging in size from 70M to 12B parameters. We provide public access to 154 checkpoints for each of the 16 models, alongside tools to download and reconstruct their exact training dataloaders for further study. We intend Pythia to facilitate research in many areas, and we present several case studies, including novel results on memorization, the effect of term frequency on few-shot performance, and reducing gender bias. We demonstrate that this highly controlled setup can be used to yield novel insights into LLMs and their training dynamics. Trained models, analysis code, training code, and training data can be found at https://github.com/EleutherAI/pythia.
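The abstract mentions 154 public checkpoints per model. As a minimal sketch, assuming the Hugging Face Hub hosting described in the Pythia repository (each checkpoint published as a branch named `step<N>` of an `EleutherAI/pythia-<size>` repo), a helper to address a specific checkpoint might look like this; the helper name is hypothetical:

```python
# Hypothetical helper: map a Pythia model size and training step to the
# (repo_id, revision) pair used on the Hugging Face Hub, assuming the
# "EleutherAI/pythia-<size>" / "step<N>" naming scheme from the Pythia repo.

def pythia_checkpoint(size: str, step: int) -> tuple[str, str]:
    """E.g. ("70m", 143000) -> ("EleutherAI/pythia-70m", "step143000")."""
    return f"EleutherAI/pythia-{size}", f"step{step}"

repo, rev = pythia_checkpoint("70m", 143000)
print(repo, rev)  # EleutherAI/pythia-70m step143000

# Loading the checkpoint itself would then use the `transformers` package
# (network download, so commented out here):
# from transformers import AutoModelForCausalLM
# model = AutoModelForCausalLM.from_pretrained(repo, revision=rev)
```

Keeping the repo id and revision as plain strings makes it easy to sweep over all checkpoints of a model when studying training dynamics.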

Stella Biderman, Hailey Schoelkopf, Quentin Anthony, Herbie Bradley, Kyle O'Brien, Eric Hallahan, Mohammad Aflah Khan, Shivanshu Purohit, USVSN Sai Prashanth, Edward Raff, Aviya Skowron, Lintang Sutawika, Oskar van der Wal · 2023

Related benchmarks

Task                               | Dataset       | Metric          | Result | Rank
Commonsense Reasoning              | HellaSwag     | Accuracy        | 63.8   | 1460
Mathematical Reasoning             | GSM8K         | Accuracy        | 2.4    | 983
Multi-task Language Understanding  | MMLU          | Accuracy        | 31.3   | 842
Commonsense Reasoning              | WinoGrande    | Accuracy        | 66.6   | 776
Question Answering                 | ARC Challenge | Accuracy        | 44.1   | 749
Commonsense Reasoning              | PIQA          | Accuracy        | 76.7   | 647
Language Modeling                  | WikiText      | Perplexity      | 30.32  | 479
Question Answering                 | OpenBookQA    | Accuracy        | 45     | 465
Question Answering                 | ARC Easy      | Normalized Acc. | 71.5   | 385
Physical Commonsense Reasoning     | PIQA          | Accuracy        | 75.1   | 329

Showing 10 of 93 rows.
