
Fast and Effective Weight Update for Pruned Large Language Models

About

Pruning large language models (LLMs) is a challenging task due to their enormous size. The primary difficulty is fine-tuning the model after pruning, which is needed to recover the lost performance caused by dropping weights. Recent approaches have either ignored fine-tuning entirely, focusing on efficient pruning criteria, or attempted layer-wise weight updates, preserving the behavior of each layer. However, even layer-wise weight updates can be costly for LLMs, and previous works have resorted to various approximations. In our paper, we propose a fast and effective weight update algorithm for pruned layers based on the Alternating Direction Method of Multipliers (ADMM). We further extend it with a simple gradual pruning mask selection and achieve state-of-the-art pruning performance across a wide range of LLMs. Code is available at https://github.com/fmfi-compbio/admm-pruning.
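The abstract describes the method only at a high level. As an illustration of how a layer-wise ADMM weight update combined with gradual mask selection can work, here is a minimal NumPy sketch. The function names, the rho/iteration settings, and the linear sparsity ramp are our own assumptions for illustration, not the paper's exact algorithm; see the linked repository for the real implementation.

```python
import numpy as np

def admm_weight_update(X, W0, mask, rho=1.0, iters=30):
    """Illustrative sketch of a layer-wise ADMM weight update.

    Minimizes ||X W - X W0||_F^2 subject to W being zero outside `mask`,
    alternating a damped least-squares step with a projection onto the mask.
    X:    (n_samples, in_features) calibration inputs
    W0:   (in_features, out_features) dense layer weights
    mask: boolean array of W0's shape, True where weights are kept
    """
    H = X.T @ X                        # Hessian of the reconstruction loss
    A = H + rho * np.eye(H.shape[0])   # ADMM damping term rho * I
    B = H @ W0                         # target: reproduce the dense outputs
    Z = mask * W0                      # sparse variable (feasible start)
    U = np.zeros_like(W0)              # scaled dual variable
    for _ in range(iters):
        W = np.linalg.solve(A, B + rho * (Z - U))  # closed-form W-step
        Z = mask * (W + U)                         # projection Z-step
        U += W - Z                                 # dual update
    return Z

def gradual_prune(X, W0, final_sparsity=0.5, rounds=5):
    """Gradually raise sparsity, re-selecting the mask by magnitude each
    round (the linear ramp here is an assumption, not the paper's rule)."""
    W = W0.copy()
    for r in range(1, rounds + 1):
        s = final_sparsity * r / rounds      # current target sparsity
        thresh = np.quantile(np.abs(W), s)   # magnitude cutoff
        mask = np.abs(W) > thresh            # keep the largest weights
        W = admm_weight_update(X, W0, mask)  # reconstruct dense outputs
    return W
```

Because the naive masked weights `mask * W0` are themselves a feasible point of the constrained problem, the ADMM solution's reconstruction error on the calibration inputs should match or beat them; a few dozen iterations typically suffice for a single layer of this size.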

Vladimír Boža • 2024

Related benchmarks

Task                       | Dataset           | Result                | Rank
Language Modeling          | WikiText-2 (test) | PPL 7.78              | 1541
Commonsense Reasoning      | HellaSwag         | Accuracy 53.35        | 1460
Question Answering         | ARC Challenge     | Accuracy 39.68        | 749
Question Answering         | ARC Easy          | Accuracy 72.77        | 386
Natural Language Inference | RTE               | Accuracy 61.37        | 367
Language Modeling          | C4                | Perplexity 8.11       | 321
Language Modeling          | Wiki              | Perplexity (PPL) 5.92 | 251
Question Answering         | BoolQ             | Accuracy 76.24        | 240
Question Answering         | OpenBookQA        | Accuracy 31.4         | 84
Zero-shot Accuracy         | ARC Easy          | Zero-shot Acc 68.18   | 63

Showing 10 of 24 rows.
