
SPQ: An Ensemble Technique for Large Language Model Compression

About

This study presents an ensemble technique, SPQ (SVD-Pruning-Quantization), for large language model (LLM) compression that combines variance-retained singular value decomposition (SVD), activation-based pruning, and post-training linear quantization. Each component targets a different source of inefficiency: (i) pruning removes redundant neurons in MLP layers, (ii) SVD reduces attention projections to compact low-rank factors, and (iii) 8-bit quantization uniformly compresses all linear layers. At matched compression ratios, SPQ outperforms the individual methods (SVD-only, pruning-only, or quantization-only) in perplexity, demonstrating the benefit of combining complementary techniques. Applied to LLaMA-2-7B, SPQ achieves up to 75% memory reduction while maintaining or improving perplexity (e.g., WikiText-2 5.47 to 4.91) and preserving accuracy on downstream benchmarks such as C4, TruthfulQA, and GSM8K. Compared to strong baselines like GPTQ and SparseGPT, SPQ offers competitive perplexity and accuracy while using less memory (6.86 GB vs. 7.16 GB for GPTQ). Moreover, SPQ improves inference throughput over GPTQ, achieving up to a 1.9x speedup, which further enhances its practicality for real-world deployment. By combining layer-aware, complementary compression techniques, SPQ may enable practical deployment of LLMs in memory-constrained environments. Code is available at: https://github.com/JiaminYao/SPQ_LLM_Compression/
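The three components of the pipeline can be illustrated in isolation. The sketch below is not the authors' implementation (see the linked repository for that); it is a minimal numpy illustration of the three ideas the abstract names: variance-retained SVD of a weight matrix, activation-based neuron pruning, and symmetric 8-bit linear quantization. Function names, the variance threshold, and the keep ratio are illustrative choices, not values from the paper.

```python
import numpy as np

def svd_compress(W, var_retained=0.95):
    """Low-rank factorization W ~ A @ B, keeping the smallest rank
    whose singular values retain `var_retained` of the spectral variance."""
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    energy = np.cumsum(S ** 2) / np.sum(S ** 2)
    r = int(np.searchsorted(energy, var_retained)) + 1
    return U[:, :r] * S[:r], Vt[:r]

def prune_neurons(W, act_norms, keep_ratio=0.5):
    """Activation-based pruning: keep the output neurons (rows of W)
    with the largest average activation magnitude on calibration data."""
    k = max(1, int(keep_ratio * W.shape[0]))
    keep = np.sort(np.argsort(act_norms)[-k:])
    return W[keep], keep

def quantize_int8(W):
    """Symmetric per-tensor 8-bit linear quantization with a single scale."""
    scale = np.abs(W).max() / 127.0
    q = np.clip(np.round(W / scale), -127, 127).astype(np.int8)
    return q, scale

# Demo on a random "weight matrix" standing in for one linear layer.
rng = np.random.default_rng(0)
W = rng.standard_normal((64, 64))
A, B = svd_compress(W, var_retained=0.90)          # low-rank factors
Wp, kept = prune_neurons(W, np.abs(W).mean(axis=1))  # half the neurons
q, scale = quantize_int8(W)                         # int8 weights + scale
max_err = np.abs(q.astype(np.float32) * scale - W).max()
```

In SPQ the components are layer-aware rather than interchangeable: SVD is applied to attention projections, pruning to MLP layers, and quantization to all linear layers, so each technique is used where it loses the least information.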

Jiamin Yao, Eren Gultepe • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Language Modeling | WikiText-2 | Perplexity (PPL) | 4.91 | 1624 |
| Language Modeling | C4 | Perplexity | 7.11 | 1422 |
| Reasoning | ARC Easy | Accuracy | 72 | 187 |
| Reasoning | HellaSwag (HS) | Accuracy | 51 | 162 |
| Reasoning | PIQA | -- | -- | 145 |
| Reasoning | WinoGrande (WG) | Accuracy | 68 | 135 |
| Reasoning | OpenBookQA | Accuracy | 30 | 77 |
| Reasoning | GSM8K | Exact Match (Flexible-Extract) | 5 | 5 |
| Reasoning | TruthfulQA 1 | BLEU | 24 | 5 |
| Reasoning | TruthfulQA 2 | BLEU | 0.38 | 5 |
