
Fast and Effective Weight Update for Pruned Large Language Models

About

Pruning large language models (LLMs) is a challenging task due to their enormous size. The primary difficulty is fine-tuning the model after pruning, which is needed to recover the lost performance caused by dropping weights. Recent approaches have either ignored fine-tuning entirely, focusing on efficient pruning criteria, or attempted layer-wise weight updates, preserving the behavior of each layer. However, even layer-wise weight updates can be costly for LLMs, and previous works have resorted to various approximations. In our paper, we propose a fast and effective weight update algorithm for pruned layers based on the Alternating Direction Method of Multipliers (ADMM). We further extend it with a simple gradual pruning mask selection and achieve state-of-the-art pruning performance across a wide range of LLMs. Code is available at https://github.com/fmfi-compbio/admm-pruning.
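The layer-wise weight update described above can be illustrated with a small sketch: given calibration inputs X, original dense weights W0, and a fixed pruning mask, ADMM alternates between an unconstrained least-squares step, a projection onto the sparsity pattern, and a dual update. This is a minimal illustrative sketch, not the paper's exact implementation; the function name and parameters are assumptions for the example.

```python
import numpy as np

def admm_prune_update(X, W0, mask, rho=1.0, iters=200):
    """Illustrative ADMM weight update for a pruned linear layer.

    X    : (n, d_in)  calibration inputs to the layer
    W0   : (d_in, d_out) original dense weights
    mask : (d_in, d_out) boolean, True where a weight is kept
    Solves  min_W 0.5 * ||X W - X W0||_F^2  s.t.  W[~mask] == 0.
    """
    H = X.T @ X                                      # Gram matrix of the inputs
    A = np.linalg.inv(H + rho * np.eye(H.shape[0]))  # cached solve factor
    B = H @ W0                                       # target statistics X^T X W0
    Z = W0 * mask                                    # sparse (masked) variable
    U = np.zeros_like(W0)                            # scaled dual variable
    for _ in range(iters):
        W = A @ (B + rho * (Z - U))                  # dense least-squares step
        Z = (W + U) * mask                           # project onto the mask
        U = U + W - Z                                # dual ascent
    return Z
```

Because the per-layer subproblem is a quadratic with a support constraint, the expensive matrix inverse is computed once and reused across iterations, which is what makes this update cheap relative to full fine-tuning.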

Vladimír Boža • 2024

Related benchmarks

Task                       | Dataset           | Metric           | Result | Rank
Language Modeling          | WikiText-2 (test) | PPL              | 7.78   | 1949
Commonsense Reasoning      | HellaSwag         | Accuracy         | 53.35  | 1891
Language Modeling          | C4                | Perplexity       | 8.11   | 1071
Question Answering         | ARC Challenge     | Accuracy         | 39.68  | 906
Question Answering         | ARC Easy          | Accuracy         | 72.77  | 597
Natural Language Inference | RTE               | Accuracy         | 61.37  | 448
Question Answering         | BoolQ             | Accuracy         | 76.24  | 317
Language Modeling          | Wiki              | Perplexity (PPL) | 5.92   | 281
Question Answering         | OpenBookQA        | Accuracy         | 31.4   | 126
Commonsense Reasoning      | WinoGrande        | Accuracy         | 69.3   | 68

(Showing 10 of 24 rows)
