SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot

About

We show for the first time that large-scale generative pretrained transformer (GPT) family models can be pruned to at least 50% sparsity in one-shot, without any retraining, at minimal loss of accuracy. This is achieved via a new pruning method called SparseGPT, specifically designed to work efficiently and accurately on massive GPT-family models. We can execute SparseGPT on the largest available open-source models, OPT-175B and BLOOM-176B, in under 4.5 hours, and can reach 60% unstructured sparsity with negligible increase in perplexity: remarkably, more than 100 billion weights from these models can be ignored at inference time. SparseGPT generalizes to semi-structured (2:4 and 4:8) patterns, and is compatible with weight quantization approaches. The code is available at: https://github.com/IST-DASLab/sparsegpt.
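As an illustration of what the semi-structured 2:4 pattern mentioned above means, the minimal sketch below zeroes the 2 smallest-magnitude weights in every group of 4 consecutive weights, yielding exactly 50% sparsity per layer. This only demonstrates the sparsity format itself; it uses plain magnitude selection rather than SparseGPT's actual layer-wise weight reconstruction, and the function name prune_2_of_4 is hypothetical.

```python
import torch

def prune_2_of_4(weight: torch.Tensor) -> torch.Tensor:
    """Zero the 2 smallest-magnitude entries in every group of 4 along the input dim.

    Illustrative only: shows the 2:4 sparsity pattern, not SparseGPT's solver.
    """
    out_features, in_features = weight.shape
    assert in_features % 4 == 0, "2:4 sparsity needs the input dimension to be a multiple of 4"
    groups = weight.reshape(out_features, in_features // 4, 4)
    # Keep the 2 largest-magnitude entries per group of 4, zero the rest.
    keep = groups.abs().topk(2, dim=-1).indices
    mask = torch.zeros_like(groups, dtype=torch.bool).scatter_(-1, keep, True)
    return (groups * mask).reshape(out_features, in_features)

w = torch.randn(8, 16)
w_sparse = prune_2_of_4(w)
print((w_sparse == 0).float().mean().item())  # 0.5 -> exactly 50% of the weights are zero
```

The same grouping idea extends to the 4:8 pattern (4 nonzeros per group of 8) that the abstract also mentions.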

Elias Frantar, Dan Alistarh • 2023

Related benchmarks

Task | Dataset | Result | Rank
Language Modeling | WikiText2 | Perplexity: 5.84 | 2839
Language Modeling | WikiText-2 (test) | PPL: 8.32 | 1949
Commonsense Reasoning | HellaSwag | Accuracy: 52.7 | 1891
Language Modeling | WikiText-2 | Perplexity (PPL): 6.52 | 1624
Visual Question Answering | VizWiz | Accuracy: 65.28 | 1525
Object Hallucination Evaluation | POPE | Accuracy: 88.21 | 1455
Language Modeling | C4 | Perplexity: 27.62 | 1422
Visual Question Answering | TextVQA | -- | 1285
Commonsense Reasoning | WinoGrande | Accuracy: 50.91 | 1085
Language Modeling | C4 | Perplexity: 8.22 | 1071
Showing 10 of 123 rows
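Several rows above report perplexity (PPL). As a reminder of how that metric is obtained, the short sketch below computes perplexity as the exponential of the mean per-token cross-entropy; the logits and targets are random placeholders, not outputs of any model evaluated here.

```python
import torch
import torch.nn.functional as F

# Placeholder logits and next-token targets: shapes (batch, seq, vocab) and (batch, seq).
vocab_size = 50257
logits = torch.randn(4, 128, vocab_size)
targets = torch.randint(0, vocab_size, (4, 128))

# Perplexity = exp(mean negative log-likelihood over all predicted tokens).
nll = F.cross_entropy(logits.reshape(-1, vocab_size), targets.reshape(-1))
perplexity = torch.exp(nll)
print(perplexity.item())
```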

Other info

Code: https://github.com/IST-DASLab/sparsegpt
