OATS: Outlier-Aware Pruning Through Sparse and Low Rank Decomposition

About

The recent paradigm shift to large-scale foundation models has brought about a new era for deep learning that, while highly successful in practice, has been plagued by prohibitively expensive costs in terms of memory consumption and compute. To mitigate these issues, there has been a concerted effort toward post-hoc neural network pruning techniques that do not require costly retraining. Despite considerable progress, existing methods often exhibit a steady drop in model performance as compression increases. In this paper, we present a novel approach to compressing large transformers, coined OATS, that utilizes the second moment information in the input embeddings to decompose the model weights into a sum of sparse and low-rank matrices. Without any retraining, OATS achieves state-of-the-art performance when compressing models by up to $60\%$ on large language models such as Llama-3 and Phi-3 and on vision transformers such as ViT and DINOv2, while delivering up to $1.37\times$ CPU acceleration over a comparably pruned model.
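To make the decomposition concrete, here is a minimal NumPy sketch of one plausible instantiation: alternate between a truncated SVD (the low-rank term) and magnitude thresholding (the sparse term) on a weight matrix whose columns are scaled by the second moment of calibration inputs. The function name `oats_sketch`, the alternating-projection scheme, and all hyperparameters are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def oats_sketch(W, X, rank=32, sparsity=0.5, n_iter=30):
    """Hypothetical sketch: decompose W ~ S + L (sparse + low-rank),
    weighted by the second moment of the input embeddings.

    W : (d_out, d_in) weight matrix
    X : (n, d_in) calibration inputs
    """
    # Diagonal scaling from the second moment of the calibration inputs.
    d = np.sqrt((X ** 2).mean(axis=0) + 1e-8)   # (d_in,)
    Ws = W * d                                   # scale each column by its input statistic

    S = np.zeros_like(Ws)
    k = int((1 - sparsity) * Ws.size)            # number of nonzeros to keep in S
    for _ in range(n_iter):
        # Low-rank step: truncated SVD of the residual after removing S.
        U, s, Vt = np.linalg.svd(Ws - S, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        # Sparse step: keep the k largest-magnitude entries of the residual.
        R = Ws - L
        thresh = np.partition(np.abs(R).ravel(), -k)[-k]
        S = np.where(np.abs(R) >= thresh, R, 0.0)

    # Undo the input scaling so that S + L approximates the original W.
    return S / d, L / d
```

The diagonal scaling here plays the same role as the input-norm weighting in activation-aware pruning methods such as Wanda: entries that multiply high-energy input features are favored when choosing the sparse support and the low-rank factors.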

Stephen Zhang, Vardan Papyan • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Commonsense Reasoning | HellaSwag | Accuracy | 54.25 | 1891 |
| Language Modeling | WikiText-2 | Perplexity (PPL) | 8.67 | 1624 |
| Language Modeling | C4 | Perplexity (PPL) | 13.09 | 1422 |
| Commonsense Reasoning | WinoGrande | Accuracy | 58.8 | 1085 |
| Language Modeling | C4 | Perplexity (PPL) | 11.75 | 1071 |
| Language Modeling | PTB | Perplexity (PPL) | 21.67 | 1034 |
| Question Answering | ARC Challenge | Accuracy | 32.51 | 906 |
| Language Modeling | WikiText | Perplexity (PPL) | 9.97 | 732 |
| Question Answering | ARC Easy | Accuracy | 56.36 | 597 |
| Natural Language Inference | RTE | Accuracy | 55.23 | 448 |
Showing 10 of 22 benchmark rows.
