
SOLAR 10.7B: Scaling Large Language Models with Simple yet Effective Depth Up-Scaling

About

We introduce SOLAR 10.7B, a large language model (LLM) with 10.7 billion parameters, demonstrating superior performance in various natural language processing (NLP) tasks. Inspired by recent efforts to efficiently up-scale LLMs, we present a method for scaling LLMs called depth up-scaling (DUS), which encompasses depthwise scaling and continued pretraining. In contrast to other LLM up-scaling methods that use mixture-of-experts, DUS requires no complex architectural changes to train and run inference efficiently. We show experimentally that DUS is simple yet effective for scaling up high-performance LLMs from small ones. Building on the DUS model, we additionally present SOLAR 10.7B-Instruct, a variant fine-tuned for instruction-following capabilities, surpassing Mixtral-8x7B-Instruct. SOLAR 10.7B is publicly available under the Apache 2.0 license, promoting broad access and application in the LLM field.

Dahyun Kim, Chanjun Park, Sanghoon Kim, Wonsung Lee, Wonho Song, Yunsu Kim, Hyeonwoo Kim, Yungi Kim, Hyeonju Lee, Jihoo Kim, Changbae Ahn, Seonghoon Yang, Sukyung Lee, Hyunbyung Park, Gyoungjin Gim, Mikyoung Cha, Hwalsuk Lee, Sunghun Kim • 2023
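The depthwise scaling step of DUS is concrete enough to sketch: the paper duplicates a 32-layer base model (Mistral 7B weights on a Llama 2 architecture), drops the last m = 8 layers from one copy and the first m = 8 from the other, and stacks the remainders into a 48-layer model that is then continued-pretrained. Below is a minimal Python sketch of that layer surgery; the function and variable names (`depth_upscale`, `base_layers`) are illustrative, not the authors' code.

```python
# Minimal sketch of depthwise scaling as described in the SOLAR 10.7B paper:
# duplicate the base layer stack, trim m layers from each copy at the seam,
# and concatenate to get 2*(n - m) layers.

from copy import deepcopy

def depth_upscale(base_layers, m=8):
    """Return an up-scaled layer stack of length 2*(n - m).

    base_layers: transformer blocks of the base model (n = 32 for the
    Mistral-7B base used in the paper); m: layers trimmed from each copy.
    """
    n = len(base_layers)
    copy_a = deepcopy(base_layers)          # contributes layers 1 .. n-m
    copy_b = deepcopy(base_layers)          # contributes layers m+1 .. n
    scaled = copy_a[: n - m] + copy_b[m:]
    assert len(scaled) == 2 * (n - m)       # 48 layers when n=32, m=8
    return scaled

# A 32-layer base becomes a 48-layer model (~10.7B parameters from 7B);
# continued pretraining then recovers and surpasses base-model quality.
layers = [f"block_{i}" for i in range(32)]  # stand-ins for real modules
print(len(depth_upscale(layers)))           # 48
```

Because the up-scaled model keeps the dense transformer layout of its base, it needs none of the routing or load-balancing machinery that mixture-of-experts scaling introduces.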

Related benchmarks

Task                               Dataset              Metric      Result  Rank
Commonsense Reasoning              HellaSwag            Accuracy    25.16   1460
Code Generation                    HumanEval            Pass@1      2.44    850
Multi-task Language Understanding  MMLU                 Accuracy    31.05   842
Language Modeling                  WikiText-103 (test)  Perplexity  9.68    524
Boolean Question Answering         BoolQ                Accuracy    61.16   307
Question Answering                 ARC-E                Accuracy    37.1    242
Question Answering                 BoolQ                Accuracy    61.53   240
Commonsense Reasoning              WinoGrande           Accuracy    60.22   231
Question Answering                 TriviaQA             Accuracy    47.72   210
Question Answering                 ARC-C                Accuracy    24.25   166

(Showing 10 of 25 rows.)
