
SAES-SVD: Self-Adaptive Suppression of Accumulated and Local Errors for SVD-based LLM Compression

About

The rapid growth in the parameter scale of large language models (LLMs) has created a high demand for efficient compression techniques. As a hardware-agnostic and highly compatible technique, low-rank compression has been widely adopted. However, existing methods typically compress each layer independently by minimizing per-layer reconstruction error, overlooking a critical limitation: reconstruction error propagates and accumulates through the network, amplifying global deviations from the full-precision baseline. To address this, we propose Self-Adaptive Error Suppression SVD (SAES-SVD), an LLM compression framework that jointly optimizes intra-layer reconstruction and inter-layer error compensation. SAES-SVD comprises two novel components: (1) Cumulative Error-Aware Layer Compression (CEALC), which formulates the compression objective as a combination of local reconstruction and weighted cumulative error compensation. From this objective, we derive a closed-form low-rank solution that relies on second-order activation statistics and explicitly aligns each layer's output with its full-precision counterpart to compensate for accumulated errors. (2) Adaptive Collaborative Error Suppression (ACES), which automatically adjusts the weighting coefficient to enhance the low-rank structure of the compression objective in CEALC. Specifically, the coefficient is optimized to maximize the ratio between the Frobenius norm of the compressed layer's output and that of the compression objective under a fixed rank, ensuring that the rank budget is used effectively. Extensive experiments across multiple LLM architectures and tasks show that, without fine-tuning or mixed-rank strategies, SAES-SVD consistently improves post-compression performance.
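The two components can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: it assumes CEALC uses Cholesky whitening of the activation second moment (as in prior activation-aware SVD compression), that the accumulated error `E` has already been mapped into weight space, and that ACES searches a small grid for the coefficient `lam`; the function names and the `1e-6` jitter are illustrative choices.

```python
import numpy as np

def cealc_compress(W, X, E, lam, rank):
    """Rank-`rank` factors for one layer under a CEALC-style objective.

    W: (d_out, d_in) weight; X: (d_in, n) calibration activations;
    E: (d_out, d_in) accumulated-error compensation term (assumed given).
    """
    # Second-order activation statistics via Cholesky whitening of X X^T
    # (small jitter keeps the factorization well-defined).
    S = np.linalg.cholesky(X @ X.T + 1e-6 * np.eye(X.shape[0]))
    # Compression objective: local reconstruction plus weighted cumulative
    # error compensation, expressed in the whitened domain.
    M = (W + lam * E) @ S
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    # Closed-form rank-r truncation, mapped back through the whitening
    # transform; the compressed layer is x -> L @ (R @ x).
    L = U[:, :rank] * s[:rank]
    R = Vt[:rank] @ np.linalg.inv(S)
    return L, R

def aces_select_lambda(W, X, E, rank, grid):
    """ACES-style coefficient choice: maximize the Frobenius-norm ratio of
    the rank-r output to the full objective, so a fixed rank budget captures
    as much of the objective's energy as possible."""
    S = np.linalg.cholesky(X @ X.T + 1e-6 * np.eye(X.shape[0]))
    best_lam, best_ratio = None, -np.inf
    for lam in grid:
        s = np.linalg.svd((W + lam * E) @ S, compute_uv=False)
        ratio = np.sqrt((s[:rank] ** 2).sum()) / np.sqrt((s ** 2).sum())
        if ratio > best_ratio:
            best_lam, best_ratio = lam, ratio
    return best_lam
```

With `lam = 0` the sketch reduces to plain activation-whitened SVD; the compensation term only changes which directions the truncated SVD keeps.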

Xing Hu, Dawei Yang, Yuan Cheng, Zhixuan Chen, Zukang Xu • 2026

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Language Modeling | WikiText2 | Perplexity 7.17 | 1875 |
| Language Modeling | WikiText-2 (test) | PPL 7.17 | 1541 |
| Language Modeling | C4 | Perplexity 13.77 | 1182 |
| Mathematical Reasoning | GSM8K | Accuracy 69 | 983 |
| Code Generation | HumanEval | Pass@1 63 | 850 |
| Language Modeling | PTB | Perplexity 15.16 | 650 |
| Question Answering | ARC Challenge (test) | Accuracy 38.8 | 63 |
| Multiple-choice Question Answering | ARC Easy (test) | Accuracy 71.2 | 50 |
| Commonsense Reasoning | PIQA (test) | Accuracy 73.4 | 46 |
| Commonsense Reasoning | HELLASWAG (test) | Accuracy 47.7 | 45 |

Showing 10 of 17 rows.
