
Sparsity-Accelerated Training for Large Language Models

About

Large language models (LLMs) have demonstrated proficiency across various natural language processing (NLP) tasks but often require additional training, such as continual pre-training and supervised fine-tuning. The cost of such training remains high, however, primarily due to the models' large parameter counts. This paper proposes leveraging sparsity in pre-trained LLMs to expedite this training process. By observing sparsity in activated neurons during forward iterations, we identify the potential for computational speed-ups by excluding inactive neurons. We address the associated challenges by extending existing neuron importance evaluation metrics and introducing a ladder omission rate scheduler. Our experiments on Llama-2 demonstrate that Sparsity-Accelerated Training (SAT) achieves comparable or superior performance to standard training while significantly accelerating the process. In practice, SAT achieves a 45% throughput improvement in continual pre-training and saves 38% of training time in supervised fine-tuning. It offers a simple, hardware-agnostic, and easily deployable framework for additional LLM training. Our code is available at https://github.com/OpenDFM/SAT.
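To make the idea concrete, here is a minimal sketch of the two mechanisms the abstract names: scoring hidden neurons by importance and omitting the lowest-scoring fraction, with the omission rate stepped in stages by a ladder-style scheduler. This is an illustrative reconstruction, not code from the SAT repository: the activation-magnitude importance score, the schedule direction, and the names `ladder_omission_rate` and `SparseMLP` are all assumptions for the sketch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def ladder_omission_rate(step, total_steps, max_rate=0.5, num_stages=5):
    # Hypothetical ladder scheduler: hold the omission rate constant within
    # each stage and raise it stage by stage up to max_rate. The direction
    # and stage count here are illustrative assumptions, not the paper's
    # exact schedule.
    stage = min(step * num_stages // total_steps, num_stages - 1)
    return max_rate * stage / (num_stages - 1)

class SparseMLP(nn.Module):
    """Feed-forward block that omits low-importance hidden neurons."""
    def __init__(self, d_model, d_hidden):
        super().__init__()
        self.up = nn.Linear(d_model, d_hidden)
        self.down = nn.Linear(d_hidden, d_model)

    def forward(self, x, omission_rate=0.0):
        h = F.relu(self.up(x))
        if omission_rate > 0.0:
            # Assumed importance metric: mean activation magnitude per
            # hidden neuron, averaged over batch and sequence positions.
            importance = h.abs().mean(dim=tuple(range(h.dim() - 1)))
            k = int(h.shape[-1] * (1.0 - omission_rate))  # neurons to keep
            kept = importance.topk(k).indices
            mask = torch.zeros_like(importance)
            mask[kept] = 1.0
            h = h * mask  # omitted neurons contribute nothing downstream
        return self.down(h)

# Usage: anneal the omission rate over training steps.
mlp = SparseMLP(d_model=64, d_hidden=256)
x = torch.randn(8, 16, 64)
for step in range(100):
    rate = ladder_omission_rate(step, total_steps=100)
    y = mlp(x, omission_rate=rate)
```

Note that masking alone does not save compute; an actual implementation would slice the weight matrices to the kept neurons so the forward and backward passes genuinely skip the omitted ones, which is where the reported throughput gains come from.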

Da Ma, Lu Chen, Pengyu Wang, Hongshen Xu, Hanqi Li, Liangtai Sun, Su Zhu, Shuai Fan, Kai Yu • 2024

Related benchmarks

Task | Dataset | Metric | Result | Rank
Commonsense Reasoning | HellaSwag | Accuracy | 93.1 | 1460
Commonsense Reasoning | WinoGrande | Accuracy | 83.6 | 776
Physical Interaction Question Answering | PIQA | Accuracy | 87 | 323
Medical Question Answering | MedMCQA | Accuracy | 59.6 | 253
Question Answering | ARC | Accuracy | 88.2 | 154
Question Answering | PubMedQA | Accuracy | 56.7 | 145
Financial NLP | FinGPT | Accuracy | 83.2 | 28
Summarization | BillSum | Accuracy | 65.7 | 28
Factuality and Reasoning | GPT4All | HellaSwag Accuracy | 0.6229 | 12
Factuality and Reasoning | MMLU | MMLU Accuracy | 55.4 | 12

Showing 10 of 14 rows.

Other info

Code: https://github.com/OpenDFM/SAT
