
Dobi-SVD: Differentiable SVD for LLM Compression and Some New Perspectives

About

We provide a new LLM-compression solution via SVD, unlocking possibilities for LLM compression beyond quantization and pruning. We point out that the optimal use of SVD lies in truncating activations, rather than merely using activations as an optimization distance. Building on this principle, we address three critical challenges in SVD-based LLM compression: (1) How can we determine the optimal activation truncation position for each weight matrix in LLMs? (2) How can we efficiently reconstruct the weight matrices from truncated activations? (3) How can we address the inherent "injection" nature of SVD that leads to information loss? We propose Dobi-SVD, which establishes a new, principled approach to SVD-based LLM compression.
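As background, a minimal sketch of the generic truncated-SVD baseline that SVD-based compression methods build on: a weight matrix W is replaced by two low-rank factors, shrinking the parameter count when the kept rank k is small. The dimensions and rank below are illustrative placeholders, and this shows only the plain weight-truncation baseline, not the Dobi-SVD algorithm itself.

```python
import numpy as np

# Illustrative truncated-SVD weight compression (generic baseline, not Dobi-SVD).
rng = np.random.default_rng(0)
d_out, d_in, k = 256, 512, 32            # k = truncation rank (hypothetical choice)
W = rng.standard_normal((d_out, d_in))

U, S, Vt = np.linalg.svd(W, full_matrices=False)
# Keep only the top-k singular triplets: W ≈ (U_k * S_k) @ Vt_k.
A = U[:, :k] * S[:k]                     # shape (d_out, k)
B = Vt[:k, :]                            # shape (k, d_in)

params_before = W.size                   # d_out * d_in
params_after = A.size + B.size           # (d_out + d_in) * k
W_approx = A @ B
rel_err = np.linalg.norm(W - W_approx) / np.linalg.norm(W)
print(params_before, params_after, round(rel_err, 3))
```

At inference time, `x @ B.T @ A.T` replaces `x @ W.T`, so the low-rank factors are used directly and W is never rebuilt. Dobi-SVD's key departure from this baseline is to apply truncation to activations and to learn the truncation position per matrix, rather than fixing a rank k for the weights up front.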

Qinsi Wang, Jinghan Ke, Masayoshi Tomizuka, Yiran Chen, Kurt Keutzer, Chenfeng Xu • 2025

Related benchmarks

Task                   Dataset        Metric              Result   Rank
Language Modeling      WikiText2      Perplexity          8.54     1875
Commonsense Reasoning  HellaSwag      Accuracy            52       1460
Language Modeling      C4             Perplexity          10.01    1182
Language Modeling      WikiText-2     Perplexity (PPL)    9.39     841
Commonsense Reasoning  WinoGrande     Accuracy            72       776
Question Answering     ARC Challenge  Accuracy            39       749
Language Modeling      PTB            Perplexity          14.83    650
Commonsense Reasoning  PIQA           Accuracy            76       647
Question Answering     ARC Easy       Accuracy            73       386
Zero-shot Reasoning    PIQA           Zero-shot Accuracy  65.2     31

(10 of 19 rows shown)
