
LoRAPrune: Structured Pruning Meets Low-Rank Parameter-Efficient Fine-Tuning

About

Large Language Models (LLMs), such as LLaMA and T5, have shown exceptional performance across various tasks through fine-tuning. Although low-rank adaptation (LoRA) has emerged to cheaply fine-tune these LLMs on downstream tasks, their deployment is still hindered by the vast model scale and computational costs. Post-training model pruning offers a way to compress LLMs. However, the current pruning methods designed for LLMs are not compatible with LoRA. This is due to their utilization of unstructured pruning on LLMs, impeding the merging of LoRA weights, or their dependence on the gradients of pre-trained weights to guide pruning, which can impose significant memory overhead. To this end, we propose LoRAPrune, a new framework that delivers an accurate structured pruned model in a highly memory-efficient manner. Specifically, we first design a LoRA-guided pruning criterion, which uses the weights and gradients of LoRA, rather than the gradients of pre-trained weights, for importance estimation. We then integrate this criterion into an iterative pruning process, effectively removing redundant channels and heads. Extensive experimental results demonstrate the superior performance of LoRAPrune over existing approaches on the LLaMA series models. At a 50% compression rate, LoRAPrune outperforms LLM-Pruner, reducing perplexity by 4.81 on WikiText2 and 3.46 on PTB while also decreasing memory usage by 52.6%. Moreover, LoRAPrune matches semi-structured pruning across multiple LLMs, demonstrating its wide applicability. The code is available at https://github.com/aim-uofa/LoRAPrune.
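The idea behind the LoRA-guided criterion can be sketched as follows. Since the merged weight is W0 + BA, the gradient with respect to the merged weight can be approximated from the LoRA factors' gradients alone, so the pre-trained weights never need their own gradient buffers. This is a minimal illustrative sketch, not the paper's exact formulation: the function names and the specific gradient approximation (grad_B @ A + B @ grad_A) are assumptions standing in for the published criterion.

```python
import numpy as np

def lora_importance(W0, B, A, grad_B, grad_A):
    """Taylor-style |weight * gradient| importance score computed
    without gradients of the frozen pre-trained weight W0.

    Hypothetical sketch: the merged weight is W0 + B @ A, and the
    gradient w.r.t. the merged weight is approximated from the LoRA
    gradients as grad_B @ A + B @ grad_A (an assumed stand-in for
    the paper's exact LoRA-guided criterion).
    """
    merged = W0 + B @ A
    approx_grad = grad_B @ A + B @ grad_A
    return np.abs(merged * approx_grad)

def prune_channels(importance, ratio):
    """Structured pruning: mask out the lowest-scoring output channels."""
    channel_scores = importance.sum(axis=1)          # one score per row/channel
    k = int(len(channel_scores) * ratio)             # number of channels to drop
    drop = np.argsort(channel_scores)[:k]            # indices of weakest channels
    mask = np.ones(len(channel_scores), dtype=bool)
    mask[drop] = False
    return mask

# Toy usage: an 8x8 layer with rank-2 LoRA factors, pruning 50% of channels.
rng = np.random.default_rng(0)
W0 = rng.standard_normal((8, 8))
B, A = rng.standard_normal((8, 2)), rng.standard_normal((2, 8))
grad_B, grad_A = rng.standard_normal((8, 2)), rng.standard_normal((2, 8))
imp = lora_importance(W0, B, A, grad_B, grad_A)
mask = prune_channels(imp, ratio=0.5)               # keeps 4 of 8 channels
```

In the iterative scheme the abstract describes, a mask like this would be re-estimated and tightened over several fine-tuning steps rather than applied once.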

Mingyang Zhang, Hao Chen, Chunhua Shen, Zhen Yang, Linlin Ou, Xinyi Yu, Bohan Zhuang• 2023

Related benchmarks

Task | Dataset | Metric | Result | Rank
Language Modeling | WikiText2 | Perplexity | 7.63 | 1875
Language Modeling | WikiText-2 | Perplexity (PPL) | 16.8 | 841
Language Modeling | PTB | Perplexity | 11.87 | 650
Classification | Zero-shot Evaluation Suite (BoolQ, PIQA, HellaSwag, WinoGrande, ARC-e, ARC-c, OBQA) | Average Accuracy (Zero-Shot Suite) | 60.05 | 59
Commonsense Reasoning | Commonsense Reasoning Suite (BoolQ, PIQA, HellaSwag, WinoGrande, ARC-e, ARC-c, OBQA) | Average Accuracy | 60.06 | 37
Zero-shot Classification | Classification Datasets (MMLU, OBQA, ARC-e, WinoGrande, ARC-c, PIQA, HellaSwag) | MMLU (5-shot) | 36.81 | 18

Other info

Code: https://github.com/aim-uofa/LoRAPrune