
ROSE: Reordered SparseGPT for More Accurate One-Shot Large Language Models Pruning

About

Pruning is widely recognized as an effective method for reducing the parameters of large language models (LLMs), potentially leading to more efficient deployment and inference. One classic and prominent path of LLM one-shot pruning is to leverage second-order gradients (i.e., Hessian), represented by the pioneering work SparseGPT. However, the predefined left-to-right pruning order in SparseGPT leads to suboptimal performance when the weights exhibit columnar patterns. This paper studies the effect of pruning order under the SparseGPT framework. The analyses lead us to propose ROSE, a reordered SparseGPT method that prioritizes weights with larger potential pruning errors to be pruned earlier. ROSE first performs pre-pruning to identify candidate weights for removal, and estimates both column and block pruning loss. Subsequently, two-level reordering is performed: columns within each block are reordered in descending order of column loss, while blocks are reordered based on block loss. We introduce the relative range of block loss as a metric to identify columnar layers, enabling adaptive reordering across the entire model. Substantial empirical results on prevalent LLMs (LLaMA2-7B/13B/70B, LLaMA3-8B, Mistral-7B) demonstrate that ROSE surpasses the original SparseGPT and other counterpart pruning methods. Our code is available at https://github.com/mingluo-su/ROSE.
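The two-level reordering described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name `rose_reorder`, the column-loss proxy (the SparseGPT-style saliency w² / diag(H⁻¹), summed over rows), and the `range_threshold` parameter are all assumptions made for the sketch.

```python
import numpy as np

def rose_reorder(W, hinv_diag, block_size, range_threshold=0.1):
    """Sketch of ROSE-style two-level reordering (hypothetical helper).

    W          : (rows, cols) weight matrix
    hinv_diag  : (cols,) diagonal of the inverse Hessian (assumed given)
    block_size : number of columns per pruning block
    Returns a column permutation: higher-loss columns/blocks come first.
    """
    n_cols = W.shape[1]
    assert n_cols % block_size == 0
    # Per-column pruning-loss estimate (pre-pruning saliency proxy).
    col_loss = (W ** 2).sum(axis=0) / hinv_diag          # shape: (n_cols,)
    # Per-block loss: sum of the column losses inside each block.
    block_loss = col_loss.reshape(-1, block_size).sum(axis=1)
    # Relative range of block loss: flags "columnar" layers where
    # reordering should help; otherwise keep the left-to-right order.
    rel_range = (block_loss.max() - block_loss.min()) / block_loss.mean()
    if rel_range < range_threshold:
        return np.arange(n_cols)                         # original order
    # Level 1: reorder columns within each block by descending column loss.
    order = np.arange(n_cols).reshape(-1, block_size)
    within = np.argsort(-col_loss.reshape(-1, block_size), axis=1)
    order = np.take_along_axis(order, within, axis=1)
    # Level 2: reorder whole blocks by descending block loss.
    return order[np.argsort(-block_loss)].reshape(-1)
```

The returned permutation would then drive the pruning order (e.g., `W[:, perm]` before running the SparseGPT sweep, undoing the permutation afterwards), so that weights with larger potential pruning error are handled earlier.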

Mingluo Su, Huan Wang • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Language Modeling | WikiText-2 | Perplexity (PPL) | 6.96 | 1624 |
| Language Modeling | WikiText | PPL | 9.29 | 732 |
| Language Modeling | WikiText2 v1 (test) | Perplexity | 8.6 | 383 |
| Classification | Zero-shot Evaluation Suite (BoolQ, PIQA, HellaSwag, WinoGrande, ARC-e, ARC-c, OBQA) | Average Accuracy (Zero-Shot Suite) | 69.57 | 94 |
| Zero-shot Language Understanding | Reasoning Suite Zero-shot (BoolQ, WinoG., PIQA, OBQA, HellaS., ARC-e, ARC-c) | BoolQ Accuracy | 82.51 | 24 |
| Zero-shot Question Answering | BoolQ, WinoGrande, PIQA, OpenBookQA, HellaSwag, ARC-Easy, ARC-Challenge Zero-shot | BoolQ Accuracy (Zero-shot) | 80.18 | 21 |
| Zero-shot Task Accuracy | Zero-shot task suite (BoolQ, WinoG., PIQA, OBQA, HellaS., ARC-e, ARC-c) (test) | BoolQ Accuracy | 82.81 | 15 |
| Inference Latency | LLaMA2 70B | Latency (ms) | 1.45e+3 | 3 |
