Mistral 7B

About

We introduce Mistral 7B v0.1, a 7-billion-parameter language model engineered for superior performance and efficiency. Mistral 7B outperforms Llama 2 13B across all evaluated benchmarks, and Llama 1 34B in reasoning, mathematics, and code generation. Our model leverages grouped-query attention (GQA) for faster inference, coupled with sliding window attention (SWA) to effectively handle sequences of arbitrary length with a reduced inference cost. We also provide a model fine-tuned to follow instructions, Mistral 7B – Instruct, that surpasses the Llama 2 13B – Chat model both on human and automated benchmarks. Our models are released under the Apache 2.0 license.
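The sliding window attention described above restricts each token to attending over a fixed number of preceding tokens rather than the full prefix, which caps the per-token attention cost. A minimal sketch of the SWA causal mask (in NumPy, not Mistral's actual implementation; the paper uses a window of 4096 tokens, while the small sizes below are for illustration only):

```python
import numpy as np

def sliding_window_mask(seq_len: int, window: int) -> np.ndarray:
    # Boolean attention mask: position i may attend only to
    # positions j with i - window < j <= i, i.e. causal attention
    # limited to the last `window` tokens.
    i = np.arange(seq_len)[:, None]
    j = np.arange(seq_len)[None, :]
    return (j <= i) & (j > i - window)

# Toy example: 6 tokens, window of 3.
mask = sliding_window_mask(6, 3)
```

Each row of `mask` has at most `window` True entries, so attention cost per token is O(window) instead of O(sequence length); information from beyond the window still propagates across stacked layers, which is how arbitrary-length sequences remain tractable.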

Albert Q. Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, Lélio Renard Lavaud, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed • 2023

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Commonsense Reasoning | HellaSwag | Accuracy | 85.7 | 1891 |
| Language Modeling | WikiText-2 | Perplexity (PPL) | 12.25 | 1624 |
| Mathematical Reasoning | GSM8K | Accuracy | 58.4 | 1362 |
| Automatic Speech Recognition | LibriSpeech clean (test) | WER | 20 | 1156 |
| Automatic Speech Recognition | LibriSpeech (test-other) | WER | 25.8 | 1151 |
| Commonsense Reasoning | WinoGrande | Accuracy | 75.3 | 1085 |
| Code Generation | HumanEval | Pass@1 | 39.02 | 1036 |
| Language Modeling | PTB | Perplexity | 49.51 | 1034 |
| Question Answering | ARC Challenge | Accuracy | 59.98 | 906 |
| Mathematical Reasoning | MATH | Accuracy | 12.88 | 882 |
Showing 10 of 779 rows.
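The HumanEval result above is reported as Pass@1, i.e. the fraction of problems solved when one generated sample per problem is checked against unit tests. The standard unbiased pass@k estimator (from the HumanEval paper by Chen et al., 2021) generalizes this to k samples drawn from n generations; a short sketch:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    # Unbiased pass@k estimator: probability that at least one of
    # k samples, drawn without replacement from n generations of
    # which c are correct, passes the unit tests.
    if n - c < k:
        return 1.0  # not enough incorrect samples to fill k slots
    return 1.0 - comb(n - c, k) / comb(n, k)
```

With k = 1 this reduces to c / n, the plain fraction of correct generations, which is what a Pass@1 score such as 39.02 reports (as a percentage).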
