
LLM-Pruner: On the Structural Pruning of Large Language Models

About

Large language models (LLMs) have shown remarkable capabilities in language understanding and generation. However, such impressive capability typically comes with a substantial model size, which presents significant challenges in the deployment, inference, and training stages. With LLMs being general-purpose task solvers, we explore their compression in a task-agnostic manner, which aims to preserve the multi-task solving and language generation ability of the original LLM. One challenge to achieving this is the enormous size of the LLM's training corpus, which makes both data transfer and model post-training over-burdensome. Thus, we tackle the compression of LLMs within the bound of two constraints: being task-agnostic and minimizing the reliance on the original training dataset. Our method, named LLM-Pruner, adopts structural pruning that selectively removes non-critical coupled structures based on gradient information, maximally preserving the majority of the LLM's functionality. To this end, the performance of pruned models can be efficiently recovered through the tuning technique LoRA in merely 3 hours, requiring only 50K samples of data. We validate LLM-Pruner on three LLMs, including LLaMA, Vicuna, and ChatGLM, and demonstrate that the compressed models still exhibit satisfactory capabilities in zero-shot classification and generation. The code is available at: https://github.com/horseee/LLM-Pruner
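The core idea of gradient-based structural pruning can be sketched as follows. This is a minimal illustration, not the paper's implementation: the first-order Taylor importance score and the `prune_rows` helper are assumptions applied to a single `Linear` layer, whereas LLM-Pruner scores and removes entire coupled structures (e.g., attention heads and their dependent projections) across layers.

```python
import torch

def group_importance(weight: torch.Tensor, grad: torch.Tensor) -> torch.Tensor:
    """First-order Taylor importance per output channel (row):
    |w * dL/dw| summed over each row's parameters. Channels whose
    removal barely changes the loss get low scores."""
    return (weight * grad).abs().sum(dim=1)

def prune_rows(linear: torch.nn.Linear, grad: torch.Tensor, ratio: float):
    """Remove the least-important output channels of a Linear layer.
    `grad` is the gradient of the loss w.r.t. `linear.weight`,
    collected from a small calibration batch."""
    scores = group_importance(linear.weight.data, grad)
    n_keep = int(linear.out_features * (1 - ratio))
    keep = torch.topk(scores, n_keep).indices.sort().values  # preserve order
    pruned = torch.nn.Linear(linear.in_features, n_keep,
                             bias=linear.bias is not None)
    pruned.weight.data = linear.weight.data[keep].clone()
    if linear.bias is not None:
        pruned.bias.data = linear.bias.data[keep].clone()
    return pruned, keep
```

In the full method, a layer pruned this way would then be fine-tuned briefly (e.g., with LoRA adapters) to recover the small accuracy drop, rather than retraining on the original corpus.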

Xinyin Ma, Gongfan Fang, Xinchao Wang• 2023

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Language Modeling | WikiText2 | Perplexity | 8.14 | 2839 |
| Language Modeling | WikiText-2 (test) | PPL | 9.88 | 1949 |
| Commonsense Reasoning | HellaSwag | Accuracy | 90.4 | 1891 |
| Language Modeling | WikiText-2 | Perplexity (PPL) | 11.58 | 1624 |
| Visual Question Answering | VizWiz | Accuracy | 20.85 | 1525 |
| Object Hallucination Evaluation | POPE | -- | -- | 1455 |
| Commonsense Reasoning | WinoGrande | Accuracy | 81.5 | 1085 |
| Language Modeling | PTB | Perplexity | 12.38 | 1034 |
| Question Answering | ARC Challenge | Accuracy | 44.54 | 906 |
| Multi-task Language Understanding | MMLU | Accuracy | 48.37 | 876 |

(Showing 10 of 136 rows.)
