
The Unseen Frontier: Pushing the Limits of LLM Sparsity with Surrogate-Free ADMM

About

Neural network pruning is a promising technique to mitigate the excessive computational and memory requirements of large language models (LLMs). Despite its promise, however, progress in this area has diminished, as conventional methods seem unable to surpass moderate sparsity levels (50-60%) without severely degrading model accuracy. This work breaks through the current impasse, presenting a principled and effective method called $\texttt{Elsa}$, which achieves extreme sparsity levels of up to 90% while retaining high model fidelity. This is done by identifying several limitations in current practice, all of which can be traced back to the reliance of existing methods on a surrogate objective formulation. $\texttt{Elsa}$ tackles this issue directly and effectively via standard and well-established constrained optimization techniques based on ADMM. Our extensive experiments across a wide range of models and scales show that $\texttt{Elsa}$ achieves substantial improvements over existing methods; e.g., it achieves 7.8$\times$ lower perplexity than the best existing method on LLaMA-2-7B at 90% sparsity. Moreover, we show that $\texttt{Elsa}$ remains stable even at extreme sparsity (e.g., 95%), yielding up to 3.98$\times$ inference speedup and 7.80$\times$ memory compression over its dense counterpart. We also present $\texttt{Elsa}_{-L}$, a quantized variant that scales to extremely large models (27B), and establish its theoretical convergence guarantees. These results highlight meaningful progress in advancing the frontier of LLM sparsity, while suggesting that significant opportunities for further advancement may remain in directions that have so far attracted limited exploration.
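To make the ADMM-based formulation concrete, below is a minimal, self-contained sketch of how a hard sparsity constraint can be handled for a single linear layer via ADMM splitting: a least-squares weight update, a projection of the auxiliary variable onto the k-sparse set, and a dual update. This is a generic illustration under assumed shapes and hyperparameters (the function name, `rho`, and the iteration count are hypothetical), not the authors' released $\texttt{Elsa}$ implementation.

```python
import numpy as np

def admm_sparse_layer(X, W_dense, sparsity=0.9, rho=1.0, iters=30):
    """Illustrative sketch (not the authors' code): prune one linear layer.

    Approximately solves   min_W ||X W - X W_dense||_F^2   s.t.  ||W||_0 <= k
    by introducing an auxiliary variable Z for the sparsity constraint and
    alternating closed-form ADMM updates.
    X        : (n_samples, d_in) calibration activations
    W_dense  : (d_in, d_out) dense weights
    """
    d_in = W_dense.shape[0]
    k = int(round((1.0 - sparsity) * W_dense.size))   # number of weights to keep

    H = X.T @ X                                       # layer-wise second-moment matrix
    G = H @ W_dense                                   # fixed part of the W-update
    L = np.linalg.cholesky(H + rho * np.eye(d_in))    # factor once, reuse every iteration

    Z = W_dense.copy()                                # auxiliary (sparse) variable
    U = np.zeros_like(W_dense)                        # scaled dual variable

    for _ in range(iters):
        # W-update: ridge-regularized least squares via the Cholesky factor
        rhs = G + rho * (Z - U)
        W = np.linalg.solve(L.T, np.linalg.solve(L, rhs))

        # Z-update: Euclidean projection onto the k-sparse set (keep top-k magnitudes)
        V = W + U
        thresh = np.sort(np.abs(V), axis=None)[-k]
        Z = np.where(np.abs(V) >= thresh, V, 0.0)

        # Dual update
        U = U + W - Z

    return Z                                          # sparse weights satisfying the constraint
```

The key point of the splitting is that neither step needs a surrogate: the W-update fits the original reconstruction objective exactly, and the Z-update enforces the sparsity constraint exactly by hard projection, with the dual variable driving the two toward agreement.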

Kwanhee Lee, Hyeondo Jang, Dongyeop Lee, Dan Alistarh, Namhoon Lee • 2025

Related benchmarks

Task                        Dataset        Metric              Result   Rank
Commonsense Reasoning       HellaSwag      Accuracy            53.12    1460
Question Answering          ARC Challenge  Accuracy            39.42    749
Language Modeling           WikiText       Perplexity (PPL)    26.97    479
Question Answering          ARC Easy       Accuracy            71.3     386
Natural Language Inference  RTE            Accuracy            58.48    367
Language Modeling           C4             Perplexity          8.78     321
Language Modeling           Wiki           Perplexity (PPL)    6.54     251
Question Answering          BoolQ          Accuracy            73.03    240
Question Answering          OpenBookQA     Accuracy            29.4     84
Zero-shot Accuracy          ARC Easy       Zero-shot Accuracy  64.69    63

(Showing 10 of 24 rows)
