
OPT: Open Pre-trained Transformer Language Models

About

Large language models, which are often trained for hundreds of thousands of compute days, have shown remarkable capabilities for zero- and few-shot learning. Given their computational cost, these models are difficult to replicate without significant capital. For the few that are available through APIs, no access is granted to the full model weights, making them difficult to study. We present Open Pre-trained Transformers (OPT), a suite of decoder-only pre-trained transformers ranging from 125M to 175B parameters, which we aim to fully and responsibly share with interested researchers. We show that OPT-175B is comparable to GPT-3, while requiring only 1/7th the carbon footprint to develop. We are also releasing our logbook detailing the infrastructure challenges we faced, along with code for experimenting with all of the released models.

Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, Todor Mihaylov, Myle Ott, Sam Shleifer, Kurt Shuster, Daniel Simig, Punit Singh Koura, Anjali Sridhar, Tianlu Wang, Luke Zettlemoyer • 2022

Related benchmarks

Task                               Dataset         Metric       Result   Rank
Commonsense Reasoning              HellaSwag       Accuracy     73.47    1891
Mathematical Reasoning             GSM8K           Accuracy     0.9      1362
Commonsense Reasoning              WinoGrande      Accuracy     62.2     1085
Language Modeling                  PTB             Perplexity   8.76     1034
Question Answering                 ARC Challenge   Accuracy     34       906
Multi-task Language Understanding  MMLU            Accuracy     29.6     876
Commonsense Reasoning              PIQA            Accuracy     73.1     751
Language Modeling                  WikiText        Perplexity   14.3     732
Instruction Following              IFEval          --           --       625
Question Answering                 ARC Easy        Accuracy     59.6     597

Showing 10 of 292 rows.
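
For the language-modeling rows, the reported number is perplexity: the exponential of the model's average per-token cross-entropy on the test set. The sketch below shows how such a number is typically computed with the Hugging Face transformers library; facebook/opt-125m is used only as a small stand-in checkpoint, since the table does not say which OPT size or evaluation harness produced the figures above.

import math

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: any facebook/opt-* checkpoint on the Hugging Face Hub works the
# same way; 125M is used here only because it is the smallest.
model_name = "facebook/opt-125m"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

text = "The quick brown fox jumps over the lazy dog."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    # When labels are supplied, the model returns the mean cross-entropy loss
    # over the predicted tokens; perplexity is exp(loss).
    outputs = model(**inputs, labels=inputs["input_ids"])

print("perplexity:", math.exp(outputs.loss.item()))

A real evaluation would average the loss over an entire test corpus (e.g. WikiText) in fixed-length windows rather than over a single sentence.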

Other info

Code
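
The code and training logbook were released through Meta AI's metaseq repository (https://github.com/facebookresearch/metaseq), and checkpoints from 125M up to 66B parameters are also available on the Hugging Face Hub under facebook/opt-*. As a hedged sketch, one common way to experiment with a released checkpoint is zero-shot prompting via transformers generation; facebook/opt-1.3b is used here purely as an example size.

from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: swap in any size from facebook/opt-125m to facebook/opt-66b;
# the 175B weights were gated and distributed on request rather than hosted openly.
model_name = "facebook/opt-1.3b"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "Large language models are difficult to replicate because"
inputs = tokenizer(prompt, return_tensors="pt")

# Greedy decoding of a short continuation; sampling can be enabled with
# do_sample=True plus top_p / temperature if desired.
output_ids = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))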
