
DipSVD: Dual-importance Protected SVD for Efficient LLM Compression

About

The ever-increasing computational demands and deployment costs of large language models (LLMs) have spurred numerous compression methods. Compared to quantization and unstructured pruning, SVD compression offers superior hardware compatibility and theoretical guarantees. However, existing SVD-based methods focus on the overall discrepancy between the original and compressed matrices while overlooking the protection of critical components within the matrix, which leads to inferior performance in the compressed models. This paper proposes a dual-level importance protection mechanism to enhance SVD-based compression methods: (1) local importance protection: preserving the most critical singular vectors within each weight matrix through channel-weighted data whitening; and (2) global importance protection: enabling less important layers to bear a greater portion of the compression burden through either a heuristic or optimization-based approach, thereby minimizing the impact of compression on critical layers. Extensive experiments demonstrate that DipSVD outperforms existing SVD-based compression approaches across multiple benchmarks, achieving superior model performance, especially at high compression ratios.
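The two protection levels described above can be illustrated with a minimal sketch. This is not the paper's implementation: the channel weights, the whitening construction (SVD-LLM-style Cholesky whitening of a calibration covariance), and the ratio-allocation heuristic are all assumptions made for illustration.

```python
import numpy as np

def channel_weighted_svd_compress(W, X, rank, eps=1e-6):
    """Local importance protection (sketch): compress W with a truncated
    SVD in a channel-weighted, data-whitened basis.

    W    : (d_out, d_in) weight matrix
    X    : (d_in, n) calibration activations
    rank : number of singular components to keep
    """
    # Hypothetical per-channel importance: activation energy per channel.
    channel_w = np.sqrt((X ** 2).mean(axis=1)) + eps
    Xw = X * channel_w[:, None]
    # Whitening factor from the weighted calibration covariance.
    cov = Xw @ Xw.T / X.shape[1]
    S = np.linalg.cholesky(cov + eps * np.eye(cov.shape[0]))
    # Truncated SVD of the whitened weight: truncation now discards the
    # directions least important under the (weighted) data distribution.
    U, sigma, Vt = np.linalg.svd(W @ S, full_matrices=False)
    A = U[:, :rank] * sigma[:rank]            # (d_out, rank)
    B = Vt[:rank] @ np.linalg.inv(S)          # (rank, d_in), W ≈ A @ B
    return A, B

def allocate_ratios(importances, overall_keep):
    """Global importance protection (heuristic sketch): give more
    critical layers a larger keep-ratio, so less important layers absorb
    more of the compression budget. Exact budget matching is not
    enforced here once clipping is applied.
    """
    imp = np.asarray(importances, dtype=float)
    keep = overall_keep * len(imp) * imp / imp.sum()
    return np.clip(keep, 0.05, 1.0)
```

With `rank` equal to the full rank, `A @ B` recovers `W` exactly (the whitening is inverted on the way back), so compression error comes only from the truncated components.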

Xuan Ding, Rui Sun, Yunjian Zhang, Xiu Yan, Yueqi Zhou, Kaihao Huang, Suzhong Fu, Chuanlong Xie, Yao Zhu • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Language Modeling | WikiText2 | Perplexity | 7.95 | 1875 |
| Language Modeling | C4 | Perplexity | 14.07 | 1182 |
| Language Modeling | PTB | Perplexity | 15.6 | 650 |
| Question Answering | ARC Challenge (test) | Accuracy | 36.9 | 63 |
| Multiple-choice Question Answering | ARC Easy (test) | Accuracy | 68.1 | 50 |
| Commonsense Reasoning | PIQA (test) | Accuracy | 73.4 | 46 |
| Commonsense Reasoning | HellaSwag (test) | Accuracy | 49.0 | 45 |
| Commonsense Reasoning | WinoGrande standard (test) | Accuracy | 69.2 | 35 |
| Mathematical Reasoning | MathQA (test) | Accuracy | 25.8 | 33 |
| Zero-shot Reasoning | Evaluation Suite Zero-shot (OpenbookQA, ARC-e, ARC-c, WinoGrande, HellaSwag, PIQA, MathQA) | Average Accuracy | 47.0 | 24 |
