
ASVD: Activation-aware Singular Value Decomposition for Compressing Large Language Models

About

In this paper, we introduce a new post-training compression paradigm for Large Language Models (LLMs) to facilitate their wider adoption. We delve into low-rank decomposition of LLM weights, and find that the challenges of this task stem from (1) the distribution variance in the LLM activations and (2) the sensitivity differences among various kinds of layers. To address these issues, we propose a training-free approach called Activation-aware Singular Value Decomposition (ASVD). Specifically, ASVD manages activation outliers by transforming the weight matrix based on the activation distribution. This transformation allows the outliers in the activation matrix to be absorbed into the transformed weight matrix, thereby enhancing decomposition accuracy. Additionally, we propose an efficient iterative calibration process that optimizes layer-specific decomposition by accounting for the varying sensitivity of different LLM layers. In this way, ASVD can compress a network by 10%-30%. Building on the successful low-rank decomposition of the projection matrices in the self-attention module, we further apply ASVD to compress the KV cache. By reducing the channel dimension of KV activations, the memory requirements of the KV cache can be substantially reduced. ASVD achieves a further 50% KV cache reduction without a performance drop, in a training-free manner.
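The core idea of absorbing activation outliers into the weight matrix can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not the authors' code: it assumes a diagonal scaling built from per-channel mean absolute activation (the paper explores several scaling variants), scales the weight's input channels before truncated SVD, and then folds the inverse scaling into the right factor.

```python
import numpy as np

def asvd_compress(W, X, rank):
    """Sketch of activation-aware SVD (ASVD).

    W: (out, in) weight matrix; X: (n, in) calibration activations.
    Scaling the input channels of W by activation magnitude before the
    SVD makes outlier channels be approximated more accurately.
    """
    # Per-input-channel scale from mean absolute activation
    # (one illustrative choice of scaling).
    s = np.abs(X).mean(axis=0) + 1e-8           # shape (in,)
    # Absorb the scaling into the weights: decompose W' = W @ diag(s).
    Ws = W * s                                   # broadcasts over columns
    U, S, Vt = np.linalg.svd(Ws, full_matrices=False)
    A = U[:, :rank] * S[:rank]                   # (out, rank)
    # Undo the scaling on the right factor: B = Vt_k @ diag(1/s).
    B = Vt[:rank] / s                            # (rank, in)
    return A, B                                  # W is approximated by A @ B

# Usage: replace one linear layer y = W x with two thinner ones.
rng = np.random.default_rng(0)
W = rng.standard_normal((64, 128))
X = rng.standard_normal((256, 128))
X[:, 0] *= 50.0                                  # simulate an outlier channel
A, B = asvd_compress(W, X, rank=32)
```

By the Eckart-Young theorem, this truncation is optimal for the activation-scaled error ||(W - AB) diag(s)||_F, which is why it beats plain SVD truncation of W on inputs with outlier channels.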

Zhihang Yuan, Yuzhang Shang, Yue Song, Dawei Yang, Qiang Wu, Yan Yan, Guangyu Sun• 2023

Related benchmarks

Task                     | Dataset              | Metric           | Result | Rank
-------------------------|----------------------|------------------|--------|-----
Language Modeling        | WikiText2            | Perplexity       | 6.92   | 2839
Language Modeling        | WikiText-2 (test)    | PPL              | 6.74   | 1949
Language Modeling        | WikiText-2           | Perplexity (PPL) | 6.54   | 1624
Language Modeling        | C4                   | Perplexity       | 7.66   | 1422
Mathematical Reasoning   | GSM8K                | Accuracy         | 44     | 1362
Language Modeling        | C4                   | Perplexity       | 15.93  | 1071
Code Generation          | HumanEval            | Pass@1           | 41     | 1036
Language Modeling        | PTB                  | Perplexity       | 16.55  | 1034
Language Modeling        | WikiText2 v1 (test)  | Perplexity       | 12.02  | 383
Multimodal Understanding | SEED-Bench           | Accuracy         | 70.88  | 343

(Showing 10 of 60 rows.)
