A Simple and Effective Pruning Approach for Large Language Models

About

As their size increases, Large Language Models (LLMs) are natural candidates for network pruning methods: approaches that drop a subset of network weights while striving to preserve performance. Existing methods, however, require either retraining, which is rarely affordable for billion-scale LLMs, or solving a weight reconstruction problem reliant on second-order information, which may also be computationally expensive. In this paper, we introduce a novel, straightforward yet effective pruning method, termed Wanda (Pruning by Weights and activations), designed to induce sparsity in pretrained LLMs. Motivated by the recent observation of emergent large-magnitude features in LLMs, our approach prunes weights with the smallest magnitudes multiplied by the corresponding input activations, on a per-output basis. Notably, Wanda requires no retraining or weight update, and the pruned LLM can be used as is. We conduct a thorough evaluation of Wanda on LLaMA and LLaMA-2 across various language benchmarks. Wanda significantly outperforms the established baseline of magnitude pruning and performs competitively against recent methods that involve intensive weight updates. Code is available at https://github.com/locuslab/wanda.

Mingjie Sun, Zhuang Liu, Anna Bair, J. Zico Kolter• 2023
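The pruning rule described in the abstract fits in a few lines of PyTorch: score each weight by its magnitude times the norm of the corresponding input activation, then drop the lowest-scoring weights within each output row. The sketch below illustrates that criterion under stated assumptions; it is not the official implementation (see the linked repository for that), and the function name, tensor shapes, and calibration setup are hypothetical.

```python
import torch

def wanda_prune_layer(weight, activations, sparsity=0.5):
    """Illustrative sketch of the Wanda criterion for one linear layer.

    weight:      (out_features, in_features) weight matrix
    activations: (num_tokens, in_features) calibration inputs to the layer
    sparsity:    fraction of weights to zero out in each output row
    """
    # Per-input-feature activation norm ||X_j||_2 over the calibration tokens.
    act_norm = activations.norm(p=2, dim=0)        # (in_features,)

    # Wanda importance score: |W_ij| * ||X_j||_2.
    score = weight.abs() * act_norm.unsqueeze(0)   # (out_features, in_features)

    # Per-output comparison: within each row, zero the lowest-scoring weights.
    num_prune = int(weight.shape[1] * sparsity)
    _, prune_idx = torch.topk(score, num_prune, dim=1, largest=False)
    mask = torch.ones_like(weight, dtype=torch.bool)
    mask.scatter_(1, prune_idx, False)

    # No retraining or weight update: the masked weights are used as is.
    return weight * mask
```

Because weights are compared within each output row rather than across the whole layer, every output retains the same fraction of its incoming weights, which is the per-output grouping the abstract refers to.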

Related benchmarks

Task                             Dataset             Metric             Result   Rank
Language Modeling                WikiText2           Perplexity         8.64     2839
Language Modeling                WikiText-2 (test)   PPL                8.27     1949
Commonsense Reasoning            HellaSwag           Accuracy           80.82    1891
Language Modeling                WikiText-2          Perplexity (PPL)   3.98     1624
Visual Question Answering        VizWiz              Accuracy           61       1525
Object Hallucination Evaluation  POPE                Accuracy           88.33    1455
Language Modeling                C4                  Perplexity         37.35    1422
Visual Question Answering        TextVQA             --                 --       1285
Commonsense Reasoning            WinoGrande          Accuracy           56.43    1085
Language Modeling                C4                  Perplexity         1        1071
Showing 10 of 132 rows
...