
Wanda++: Pruning Large Language Models via Regional Gradients

About

Large Language Model (LLM) pruning seeks to remove unimportant weights for inference speedup with minimal accuracy impact. However, existing methods often suffer from accuracy degradation without full-model sparsity-aware fine-tuning. This paper presents Wanda++, a novel pruning framework that outperforms state-of-the-art methods by utilizing decoder-block-level regional gradients. Specifically, Wanda++ is the first to improve the pruning score with regional gradients, and it proposes an efficient regional optimization method to minimize pruning-induced discrepancies between dense and sparse decoder outputs. Notably, Wanda++ improves perplexity by up to 32% over Wanda on language modeling and generalizes effectively to downstream tasks. Moreover, despite updating weights via regional optimization, Wanda++ remains orthogonal to sparsity-aware fine-tuning, further reducing perplexity to a great extent when combined with LoRA. The approach is lightweight, pruning a 7B LLaMA model in under 10 minutes on a single H100 GPU.
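As a rough illustration of the scoring idea described above, the sketch below augments the standard Wanda importance score |W_ij| * ||X_j||_2 with a regional (decoder-block-level) gradient magnitude and minimizes a simple dense-vs-sparse output discrepancy. The function names, the mixing factor `alpha`, and the plain MSE reconstruction loss are illustrative assumptions for this sketch, not the paper's exact formulation.

```python
# Minimal sketch of Wanda-style pruning with a regional-gradient term.
# Assumptions: the combined score (|W| * (||X|| + alpha * |G|)) and the MSE
# block-reconstruction loss are illustrative, not Wanda++'s exact objective.
import torch
import torch.nn.functional as F


def regional_pruning_mask(weight, act_norm, regional_grad, sparsity=0.5, alpha=1.0):
    """Return a boolean mask keeping the highest-scoring weights per output row.

    weight        : (out_features, in_features) linear-layer weight
    act_norm      : (in_features,) l2 norm of each input feature over a
                    calibration batch, as in Wanda
    regional_grad : (out_features, in_features) gradient of a decoder-block
                    reconstruction loss w.r.t. the weight (the "regional" signal)
    """
    # Wanda score |W_ij| * ||X_j||_2, augmented with the regional gradient magnitude.
    score = weight.abs() * (act_norm.unsqueeze(0) + alpha * regional_grad.abs())

    # Prune the lowest-scoring weights within each output row
    # (per-output comparison group, as in Wanda).
    k = int(weight.shape[1] * sparsity)
    _, prune_idx = torch.topk(score, k, dim=1, largest=False)
    mask = torch.ones_like(weight, dtype=torch.bool)
    mask.scatter_(1, prune_idx, False)
    return mask


def regional_recon_loss(dense_out, sparse_out):
    """Decoder-block output discrepancy minimized during regional optimization
    (a simple MSE here; the paper's objective may differ)."""
    return F.mse_loss(sparse_out, dense_out)


# Example: prune a toy linear layer to 50% unstructured sparsity.
torch.manual_seed(0)
W = torch.randn(8, 16)
x_norm = torch.rand(16)   # calibration activation norms
G = torch.randn(8, 16)    # regional gradient (would come from a block-level loss)
mask = regional_pruning_mask(W, x_norm, G, sparsity=0.5)
W_sparse = W * mask
print(f"sparsity: {1 - mask.float().mean().item():.2f}")
```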

Yifan Yang, Kai Zhen, Bhavana Ganesh, Aram Galstyan, Goeric Huybrechts, Markus Müller, Jonas M. Kübler, Rupak Vignesh Swaminathan, Athanasios Mouchtaris, Sravan Babu Bodapati, Nathan Susanj, Zheng Zhang, Jack FitzGerald, Abhishek Kumar • 2025

Related benchmarks

Task                  | Dataset       | Metric     | Result | Rank
Language Modeling     | WikiText2     | Perplexity | 5.8    | 2839
Commonsense Reasoning | HellaSwag     | Accuracy   | 54.52  | 1891
Language Modeling     | C4            | Perplexity | 8.07   | 1071
Language Modeling     | PTB           | Perplexity | 22.01  | 1034
Question Answering    | ARC Challenge | --         | --     | 906
Question Answering    | ARC Easy      | Accuracy   | 72.56  | 597
Question Answering    | PIQA          | Accuracy   | 76.88  | 374
Reading Comprehension | BoolQ         | Accuracy   | 71.41  | 279
Commonsense Reasoning | WinoGrande    | Accuracy   | 68.27  | 68
