Code Llama: Open Foundation Models for Code

About

We release Code Llama, a family of large language models for code based on Llama 2 that provides state-of-the-art performance among open models, infilling capabilities, support for large input contexts, and zero-shot instruction-following ability for programming tasks. We provide multiple flavors to cover a wide range of applications: foundation models (Code Llama), Python specializations (Code Llama - Python), and instruction-following models (Code Llama - Instruct), each with 7B, 13B, 34B, and 70B parameters. All models are trained on sequences of 16k tokens and show improvements on inputs with up to 100k tokens. The 7B, 13B, and 70B Code Llama and Code Llama - Instruct variants support infilling based on surrounding content. Code Llama reaches state-of-the-art performance among open models on several code benchmarks, with scores of up to 67% and 65% on HumanEval and MBPP, respectively. Notably, Code Llama - Python 7B outperforms Llama 2 70B on HumanEval and MBPP, and all our models outperform every other publicly available model on MultiPL-E. We release Code Llama under a permissive license that allows for both research and commercial use.
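Of the capabilities listed above, infilling is the most distinctive: the model conditions on code both before and after the insertion point. A minimal sketch, assuming the Hugging Face transformers library and the public codellama/CodeLlama-7b-hf checkpoint (neither is part of this page); the <FILL_ME> marker is expanded by the tokenizer into the prefix/suffix special tokens the model was trained with:

```python
# Sketch of fill-in-the-middle with Code Llama via Hugging Face transformers.
# Assumes the public codellama/CodeLlama-7b-hf checkpoint (an assumption here).
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "codellama/CodeLlama-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# <FILL_ME> marks the span to infill; the tokenizer rewrites the prompt into
# the prefix/suffix format used during infilling training.
prompt = '''def remove_non_ascii(s: str) -> str:
    """<FILL_ME>"""
    return "".join(c for c in s if ord(c) < 128)
'''
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

The Instruct variants need no such markers: as the abstract notes, they follow plain natural-language prompts zero-shot.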

Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Romain Sauvestre, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, Gabriel Synnaeve • 2023

Related benchmarks

Task                               Dataset           Metric            Result  Rank
Commonsense Reasoning              HellaSwag         Accuracy            62.9  1891
Mathematical Reasoning             GSM8K             Accuracy            58.2  1362
Commonsense Reasoning              WinoGrande        Accuracy            62.3  1085
Code Generation                    HumanEval         Pass@1              65.2  1036
Mathematical Reasoning             MATH              --                    --   882
Multi-task Language Understanding  MMLU              Accuracy            36.9   876
Mathematical Reasoning             GSM8K (test)      Accuracy            54.2   770
Code Generation                    HumanEval (test)  Pass@1              67.8   506
Multi-turn Dialogue Evaluation     MT-Bench          Overall Score       5.71   447
Mathematical Reasoning             MATH (test)       Overall Accuracy    16.4   433
Showing 10 of 162 rows.
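The Pass@1 entries above are conventionally computed with the unbiased pass@k estimator introduced alongside HumanEval (Chen et al., 2021), which the Code Llama paper also uses. A minimal sketch, where n is the number of samples generated per problem and c the number that pass the unit tests:

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: probability that at least one of k samples,
    drawn without replacement from n generations (c correct), passes."""
    if n - c < k:
        return 1.0  # too few failures to fill a size-k draw without a success
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

# Sanity check: pass@1 reduces to the raw success rate c / n.
print(pass_at_k(200, 130, 1))  # 0.65
```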

Other info

Code
