
Stable LM 2 1.6B Technical Report

About

We introduce StableLM 2 1.6B, the first in a new generation of our language model series. In this technical report, we present in detail the data and training procedure leading to the base and instruction-tuned versions of StableLM 2 1.6B. The weights for both models are available via Hugging Face for anyone to download and use. The report contains thorough evaluations of these models, including zero- and few-shot benchmarks, multilingual benchmarks, and the MT benchmark focusing on multi-turn dialogues. At the time of publishing this report, StableLM 2 1.6B was the state-of-the-art open model under 2B parameters by a significant margin. Given its appealing small size, we also provide throughput measurements on a number of edge devices. In addition, we open source several quantized checkpoints and provide their performance metrics compared to the original model.
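Since the weights are published on Hugging Face, the base model can be loaded with the `transformers` library. A minimal sketch, assuming `transformers` and `torch` are installed (the repo id `stabilityai/stablelm-2-1_6b` is the published base checkpoint; the instruction-tuned variant lives under a separate repo id):

```python
# Minimal sketch: text completion with the StableLM 2 1.6B base checkpoint
# via Hugging Face `transformers`. Downloading the weights on first use
# requires network access and a few GB of disk space.

MODEL_ID = "stabilityai/stablelm-2-1_6b"  # base model repo on Hugging Face

def generate(prompt: str, max_new_tokens: int = 64) -> str:
    """Load the model and tokenizer, then complete `prompt`."""
    # Imports kept local so the sketch can be read and type-checked without
    # the heavy dependencies installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Recent `transformers` releases ship the StableLM architecture natively;
    # older versions may need trust_remote_code=True here.
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

    inputs = tokenizer(prompt, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```

Calling `generate("...")` triggers the download on first use; quantized checkpoints (mentioned above) reduce the footprint for edge deployment.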

Marco Bellagente, Jonathan Tow, Dakota Mahan, Duy Phung, Maksym Zhuravinskyi, Reshinth Adithyan, James Baicoianu, Ben Brooks, Nathan Cooper, Ashish Datta, Meng Lee, Emad Mostaque, Michael Pieler, Nikhil Pinnaparaju, Paulo Rocha, Harry Saini, Hannah Teufel, Niccolo Zanichelli, Carlos Riquelme • 2024

Related benchmarks

Task                               Dataset                Metric      Result   Rank
Mathematical Reasoning             GSM8K                  Accuracy    19.3     1362
Multi-task Language Understanding  MMLU                   Accuracy    36       876
Question Answering                 OpenBookQA             Accuracy    37       465
Reasoning                          HellaSwag (HS)         Accuracy    66.7     162
Reasoning                          PIQA                   Accuracy    76.8     145
Reasoning                          WinoGrande (WG)        Accuracy    59.2     135
Question Answering                 CommonsenseQA (CSQA)   Accuracy    34.6     124
Reasoning                          ARC                    Accuracy    53.5     94
Reasoning                          SIQA                   Accuracy    43.5     44
Trivia QA                          Trivia QA              Accuracy    35.6     32

(Showing 10 of 12 rows.)
