
The Falcon Series of Open Language Models

About

We introduce the Falcon series: causal decoder-only models with 7B, 40B, and 180B parameters, trained on diverse, high-quality corpora predominantly assembled from web data. The largest model, Falcon-180B, has been trained on over 3.5 trillion tokens of text, the largest openly documented pretraining run. Falcon-180B significantly outperforms models such as PaLM or Chinchilla, and improves upon concurrently developed models such as LLaMA 2 or Inflection-1. It nears the performance of PaLM-2-Large at a reduced pretraining and inference cost, making it, to our knowledge, one of the three best language models in the world along with GPT-4 and PaLM-2-Large. We report detailed evaluations, as well as a deep dive into the methods and custom tooling employed to pretrain Falcon. Notably, we report on our custom distributed training codebase, allowing us to efficiently pretrain these models on up to 4,096 A100s on cloud AWS infrastructure with limited interconnect. We release a 600B-token extract of our web dataset, as well as the Falcon-7/40/180B models under a permissive license to foster open science and accelerate the development of an open ecosystem of large language models.
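The "causal decoder-only" architecture mentioned above means each token may attend only to itself and earlier positions in the sequence. A minimal illustrative sketch of that attention mask (not the Falcon training codebase):

```python
import numpy as np

def causal_mask(seq_len: int) -> np.ndarray:
    """Lower-triangular boolean mask: position i may attend to positions j <= i."""
    return np.tril(np.ones((seq_len, seq_len), dtype=bool))

# For a 4-token sequence, token 0 sees only itself, token 3 sees everything.
mask = causal_mask(4)
```

In practice, positions where the mask is False are set to -inf before the attention softmax, so future tokens contribute zero weight.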

Ebtesam Almazrouei, Hamza Alobeidli, Abdulaziz Alshamsi, Alessandro Cappelli, Ruxandra Cojocaru, Mérouane Debbah, Étienne Goffinet, Daniel Hesslow, Julien Launay, Quentin Malartic, Daniele Mazzotta, Badreddine Noune, Baptiste Pannier, Guilherme Penedo • 2023

Related benchmarks

Task                               Dataset           Metric           Result   Rank
Commonsense Reasoning              HellaSwag         Accuracy         89       1460
Multi-task Language Understanding  MMLU              Accuracy         25.2     842
Commonsense Reasoning              WinoGrande        Accuracy         68.9     776
Question Answering                 ARC Challenge     Accuracy         87.8     749
Commonsense Reasoning              PIQA              Accuracy         84.9     647
Question Answering                 OpenBookQA        Accuracy         76.4     465
Code Generation                    HumanEval (test)  Pass@1           35.4     444
Question Answering                 ARC Easy          Normalized Acc   70.4     385
Natural Language Inference         RTE               Accuracy         80.1     367
Physical Commonsense Reasoning     PIQA              Accuracy         78.5     329

Showing 10 of 44 rows
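The HumanEval row above reports Pass@1, the probability that a sampled completion passes the problem's unit tests. The standard unbiased estimator of pass@k from n sampled completions with c correct (as introduced with the HumanEval benchmark; the exact evaluation script used for this table is an assumption) is:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimate: 1 - C(n-c, k) / C(n, k),
    where n = samples drawn, c = samples that pass the tests."""
    if n - c < k:
        # Fewer than k failures exist, so any k-subset contains a pass.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# E.g. 2 samples, 1 correct: a random single draw passes half the time.
estimate = pass_at_k(n=2, c=1, k=1)
```

Averaging this estimate over all benchmark problems gives the reported Pass@1 score.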
