
SlimGPT: Layer-wise Structured Pruning for Large Language Models

About

Large language models (LLMs) have garnered significant attention for their remarkable capabilities across various domains, but their vast parameter scales make practical deployment challenging. Structured pruning is an effective way to balance model performance with efficiency, yet restoring performance under tight computational budgets remains a principal challenge in pruning LLMs. We therefore present SlimGPT, a low-cost and fast structured pruning method for LLMs based on the Optimal Brain Surgeon (OBS) framework. We propose Batched Greedy Pruning for rapid and near-optimal pruning: it improves the accuracy of head-wise pruning error estimation through grouped Cholesky decomposition and raises the pruning efficiency of the FFN via a Dynamic Group Size, achieving approximately locally optimal pruning results within one hour. In addition, we examine the limitations of layer-wise pruning from the perspective of error accumulation and propose the Incremental Pruning Ratio, a non-uniform pruning strategy that reduces performance degradation. Experimental results on the LLaMA benchmark show that SlimGPT outperforms other methods and achieves state-of-the-art results.
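As a rough illustration of the layer-wise OBS framework the abstract builds on, the sketch below greedily prunes whole input columns of a single linear layer using calibration activations, compensating the surviving weights after each removal. This is a minimal approximation under stated assumptions: the function name, damping term, and column-level granularity are drawn from the general OBS/SparseGPT literature, not from SlimGPT's released code; SlimGPT itself operates on attention heads and FFN channel groups with grouped Cholesky decomposition and a Dynamic Group Size, and its exact updates may differ.

```python
# Minimal sketch of OBS-style greedy column pruning for one linear layer.
# `obs_prune_columns`, the damping constant, and the calibration setup are
# illustrative assumptions, not the authors' implementation.
import numpy as np

def obs_prune_columns(W, X, n_prune, damp=1e-4):
    """Greedily zero `n_prune` input columns of W (d_out x d_in),
    compensating remaining weights to minimize ||W X - W' X||_F^2.
    X holds calibration activations, shape (d_in, n_samples)."""
    W = W.copy()
    d_in = W.shape[1]
    G = X @ X.T
    H = G + damp * np.trace(G) / d_in * np.eye(d_in)  # damped layer Hessian
    Lc = np.linalg.cholesky(H)         # H = Lc Lc^T (Cholesky factorization)
    Linv = np.linalg.inv(Lc)
    Hinv = Linv.T @ Linv               # H^{-1} recovered from the factor
    pruned = []
    for _ in range(n_prune):
        # OBS saliency of zeroing column q: sum_i W[i,q]^2 / [H^{-1}]_{qq}
        err = (W ** 2).sum(axis=0) / np.diag(Hinv)
        err[pruned] = np.inf           # never re-select a pruned column
        q = int(np.argmin(err))
        # Compensate surviving weights; this also zeroes column q exactly.
        W -= np.outer(W[:, q] / Hinv[q, q], Hinv[q, :])
        W[:, q] = 0.0
        # Rank-one downdate of H^{-1} to drop column q from future steps.
        Hinv -= np.outer(Hinv[:, q], Hinv[q, :]) / Hinv[q, q]
        Hinv[q, :] = Hinv[:, q] = 0.0
        Hinv[q, q] = 1.0               # keep the diagonal finite next pass
        pruned.append(q)
    return W, pruned

# Toy usage: prune a quarter of the input channels of a random layer.
rng = np.random.default_rng(0)
W = rng.standard_normal((64, 128))
X = rng.standard_normal((128, 256))
W_pruned, removed = obs_prune_columns(W, X, n_prune=32)
```

The Incremental Pruning Ratio from the abstract would then amount to calling such a routine with a non-uniform per-layer `n_prune` instead of a single global ratio, so that layers whose errors accumulate most are pruned least; the specific schedule is the paper's contribution and is not reproduced here.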

Gui Ling, Ziyang Wang, Yuliang Yan, Qingwen Liu • 2024

Related benchmarks

Task | Dataset | Metric | Result | Rank
Language Modeling | WikiText-2 | Perplexity | 11.59 | 1875
Language Modeling | WikiText-2 (test) | Perplexity | 17.73 | 1541
Multi-task Language Understanding | MMLU | -- | -- | 842
Language Modeling | WikiText-2 | Perplexity | 16.68 | 841
Multi-task Language Understanding | MMLU (test) | Accuracy | 35.4 | 303
Commonsense Reasoning | Common Sense Reasoning Tasks | Avg. Score | 65.75 | 241
Language Understanding | MMLU 5-shot (test) | Accuracy | 39.4 | 149
Language Understanding | MMLU 0-shot | Accuracy | 47.44 | 110
Long-context Understanding | LongBench (test) | -- | -- | 80
Commonsense Reasoning | Commonsense Reasoning | Accuracy | 64.27 | 44

Showing 10 of 20 rows.
