
Swift-SVD: Theoretical Optimality Meets Practical Efficiency in Low-Rank LLM Compression

About

The deployment of Large Language Models is constrained by the memory and bandwidth demands of static weights and the dynamic Key-Value cache. SVD-based compression provides a hardware-friendly way to reduce these costs. However, existing methods suffer from two key limitations: some are suboptimal in reconstruction error, while others are theoretically optimal but practically inefficient. In this paper, we propose Swift-SVD, an activation-aware, closed-form compression framework that simultaneously guarantees theoretical optimality, practical efficiency, and numerical stability. Swift-SVD incrementally aggregates the covariance of output activations over a batch of inputs and performs a single eigenvalue decomposition after aggregation, enabling training-free, fast, and optimal layer-wise low-rank approximation. We employ effective rank to analyze local layer-wise compressibility and design a dynamic rank allocation strategy that jointly accounts for local reconstruction loss and end-to-end layer importance. Extensive experiments across six LLMs and eight datasets demonstrate that Swift-SVD outperforms state-of-the-art baselines, achieving optimal compression accuracy while delivering 3-70x speedups in end-to-end compression time. Our code will be released upon acceptance.
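The core pipeline described above (accumulate an output-activation covariance over input batches, then run one eigendecomposition to get a closed-form low-rank factorization) can be sketched in NumPy. This is a minimal illustration inferred from the abstract, not the paper's released implementation; the function names (`aggregate_covariance`, `swift_svd_sketch`, `effective_rank`) and the exact projection form are assumptions.

```python
import numpy as np

def aggregate_covariance(weight, input_batches):
    # Hypothetical helper: incrementally accumulate the (uncentred) covariance
    # of output activations Y = X @ W.T across calibration batches, so only
    # one d_out x d_out matrix is kept in memory at a time.
    d_out = weight.shape[0]
    cov = np.zeros((d_out, d_out))
    for X in input_batches:
        Y = X @ weight.T          # output activations for this batch
        cov += Y.T @ Y            # running covariance aggregation
    return cov

def swift_svd_sketch(weight, input_batches, rank):
    # One closed-form compression step: a single eigendecomposition of the
    # aggregated covariance, then projection of W onto the top-`rank`
    # eigenvectors, yielding two thin factors A @ B that replace W.
    cov = aggregate_covariance(weight, input_batches)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    U = eigvecs[:, -rank:]                   # top-`rank` eigenvectors
    return U, U.T @ weight                   # A (d_out x r), B (r x d_in)

def effective_rank(mat):
    # Effective rank: exponential of the entropy of the normalized
    # singular-value distribution; a proxy for layer-wise compressibility.
    s = np.linalg.svd(mat, compute_uv=False)
    p = s / s.sum()
    return float(np.exp(-(p * np.log(p + 1e-12)).sum()))
```

Because the step is closed-form, per-layer compression cost is one pass over the calibration batches plus a single `eigh`, which is what makes a training-free speedup over iterative baselines plausible.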

Ruoling Qi, Yirui Liu, Xuaner Wu, Xiangyu Wang, Ming Li, Chen Chen, Jian Chen, Yin Chen, Qizhen Weng• 2026

Related benchmarks

Task                     Dataset                                                  Metric                     Result   Rank
Language Modeling        WikiText-2                                               Perplexity (PPL)           6.63     1624
Language Modeling        C4                                                       Perplexity                 11.08    1071
Zero-shot Reasoning      PIQA                                                     Zero-shot Accuracy         73       62
Zero-shot Reasoning      WinoGrande                                               Accuracy                   68       54
Zero-shot Reasoning      HellaSwag                                                Accuracy                   48       48
Zero-shot Reasoning      ARC-Easy                                                 Zero-shot Accuracy         65       41
Zero-shot Reasoning      MathQA                                                   Accuracy                   23       26
Zero-shot Reasoning      OpenBookQA                                               Accuracy                   27       26
Zero-shot Reasoning      ARC-e, PIQA, OpenBookQA, WinoGrande, HellaSwag, MathQA   Average Accuracy           51       19
Common Sense Reasoning   Six benchmarks (ARC-e, PIQA, OpenBookQA, WinoGrande,     Average Accuracy           56       15
                         HellaSwag, MathQA)

(10 of 11 benchmark rows shown.)
