
Low-Rank Compression of Language Models via Differentiable Rank Selection

About

Approaches for compressing large language models using low-rank decomposition have made strides, particularly with the introduction of activation- and loss-aware SVD, which improves the trade-off between decomposition rank and downstream task performance. Despite these advancements, a persistent challenge remains: selecting the optimal ranks for each layer to jointly optimise compression rate and downstream task accuracy. Current methods either rely on heuristics whose limited discrete search space can yield sub-optimal results, or are gradient-based but underperform heuristic approaches when no post-compression fine-tuning is applied. To address these issues, we propose Learning to Low-Rank Compress (LLRC), a gradient-based approach that directly learns the weights of masks that select singular values in a fine-tuning-free setting. Using a calibration dataset, we train only the mask weights to progressively select fewer singular values while minimising the divergence of intermediate activations from the original model. Our approach outperforms competing rank-selection methods that similarly require no post-compression fine-tuning across various compression rates on common-sense reasoning and open-domain question-answering tasks. For instance, at a compression rate of 20% on Llama-2-13B, LLRC outperforms the competitive Sensitivity-based Truncation Rank Searching (STRS) on MMLU, BoolQ, and OpenbookQA by 12%, 3.5%, and 4.4%, respectively. Compared to other compression techniques, our approach consistently outperforms fine-tuning-free variants of SVD-LLM and LLM-Pruner across datasets and compression rates. Our fine-tuning-free approach also performs competitively with the fine-tuning variant of LLM-Pruner.
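The core idea in the abstract — learning a differentiable mask over singular values while matching the original model's activations — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the layer name, mask initialisation, sparsity weight `lam`, and training loop are all assumptions made for the example.

```python
import torch

class LowRankMasked(torch.nn.Module):
    """Illustrative sketch: W is approximated as U diag(m * s) V^T,
    where only the mask logits m are trainable (hypothetical design)."""
    def __init__(self, weight: torch.Tensor):
        super().__init__()
        U, S, Vh = torch.linalg.svd(weight, full_matrices=False)
        # Frozen SVD factors of the original weight matrix.
        self.register_buffer("U", U)
        self.register_buffer("S", S)
        self.register_buffer("Vh", Vh)
        # Mask logits initialised so sigmoid(logits) is close to 1,
        # i.e. all singular values start out selected.
        self.mask_logits = torch.nn.Parameter(torch.full_like(S, 4.0))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        gate = torch.sigmoid(self.mask_logits)  # soft rank selection
        W_hat = self.U @ torch.diag(gate * self.S) @ self.Vh
        return x @ W_hat.T

torch.manual_seed(0)
W = torch.randn(64, 64)
layer = LowRankMasked(W)
x = torch.randn(8, 64)            # stand-in for calibration activations
target = x @ W.T                  # activations of the original layer
lam = 1e-2                        # sparsity strength (assumed value)

# Train only the mask weights: match activations while pushing
# the number of selected singular values down.
opt = torch.optim.Adam([layer.mask_logits], lr=1e-2)
for _ in range(100):
    opt.zero_grad()
    out = layer(x)
    divergence = (out - target).pow(2).mean()
    sparsity = torch.sigmoid(layer.mask_logits).sum()
    (divergence + lam * sparsity).backward()
    opt.step()
```

After training, gates near zero mark singular values that can be dropped, so the effective rank of each layer is learned rather than chosen by a heuristic.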

Sidhant Sundrani, Francesco Tudisco, Pasquale Minervini • 2025

Related benchmarks

Task                               Dataset                 Metric    Result  Rank
Physical Commonsense Reasoning     PIQA                    Accuracy  78.1    329
Boolean Question Answering         BoolQ                   Accuracy  77.3    307
Physical Commonsense Reasoning     PIQA (val)              Accuracy  78.6    113
Multi-task Language Understanding  MMLU                    Accuracy  44.8    87
Multi-task Language Understanding  MMLU (val)              Accuracy  49.7    58
Reading Comprehension              BoolQ (val)             Accuracy  81.3    34
Question Answering                 NQ-Open (val)           Accuracy  26.5    28
Question Answering                 OQA                     Accuracy  33.4    24
Question Answering                 OpenbookQA (OQA) (val)  Accuracy  36.0    22
Open-domain Question Answering     NQ-Open                 Accuracy  22.1    20
