SVD-LLM: Truncation-aware Singular Value Decomposition for Large Language Model Compression

About

The advancements in Large Language Models (LLMs) have been hindered by their substantial sizes, which necessitate LLM compression methods for practical deployment. Singular Value Decomposition (SVD) offers a promising solution for LLM compression. However, state-of-the-art SVD-based LLM compression methods have two key limitations: truncating smaller singular values may lead to higher compression loss, and the compressed weights are not updated after SVD truncation. In this work, we propose SVD-LLM, an SVD-based post-training LLM compression method that addresses the limitations of existing methods. SVD-LLM incorporates a truncation-aware data whitening technique to ensure a direct mapping between singular values and compression loss. Moreover, SVD-LLM adopts a parameter update with sequential low-rank approximation to compensate for the accuracy degradation after SVD compression. We evaluate SVD-LLM on 10 datasets and seven models from three different LLM families at three different scales. Our results demonstrate the superiority of SVD-LLM over state-of-the-art methods, especially at high model compression ratios. Our code is available at https://github.com/AIoT-MLSys-Lab/SVD-LLM
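To illustrate the core idea behind truncation-aware data whitening, the sketch below contrasts plain SVD truncation of a weight matrix with a whitened variant that truncates the SVD of the weight multiplied by a Cholesky factor of the input activation Gram matrix, so that the discarded singular values correspond directly to loss on the actual inputs. This is a simplified NumPy illustration of the general technique, not the SVD-LLM implementation; the function names and the exact whitening construction here are our own assumptions.

```python
import numpy as np

def svd_truncate(W, rank):
    # Plain SVD truncation: keep only the `rank` largest singular values of W.
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank, :]

def whitened_svd_truncate(W, X, rank):
    # Truncation-aware whitening (illustrative sketch, not the paper's code):
    # S is a Cholesky factor of the input Gram matrix X X^T, so the singular
    # values of W S measure compression loss on the calibration inputs X.
    d_in = X.shape[0]
    S = np.linalg.cholesky(X @ X.T + 1e-8 * np.eye(d_in))  # small jitter for stability
    U, s, Vt = np.linalg.svd(W @ S, full_matrices=False)
    # Truncate in the whitened space, then map back: W_c = (U_k s_k V_k^T) S^{-1}.
    M = (U[:, :rank] * s[:rank]) @ Vt[:rank, :]
    return M @ np.linalg.inv(S)
```

Because the whitened truncation is optimal in the data norm, its reconstruction error on the calibration inputs, ||W X - W_c X||_F, is never larger than that of plain truncation at the same rank.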

Xin Wang, Yu Zheng, Zhongwei Wan, Mi Zhang · 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Language Modeling | WikiText2 | Perplexity | 7.94 | 1875 |
| Language Modeling | WikiText-2 (test) | PPL | 6.34 | 1541 |
| Commonsense Reasoning | HellaSwag | Accuracy | 60.4 | 1460 |
| Language Modeling | C4 | Perplexity | 10.8 | 1182 |
| Mathematical Reasoning | GSM8K | Accuracy | 64 | 983 |
| Code Generation | HumanEval | Pass@1 | 55 | 850 |
| Language Modeling | WikiText-2 | Perplexity (PPL) | 8.82 | 841 |
| Question Answering | ARC Challenge | Accuracy | 32.3 | 749 |
| Language Modeling | PTB | Perplexity | 16.22 | 650 |
| Mathematical Reasoning | MATH | Accuracy | 1.6 | 643 |

Showing 10 of 71 rows.
