
One-for-All Pruning: A Universal Model for Customized Compression of Large Language Models

About

Existing pruning methods for large language models (LLMs) focus on achieving high compression rates while maintaining model performance. Although these methods perform well when handling a single user's compression request, their processing time increases linearly with the number of requests, making them inefficient for real-world scenarios with multiple simultaneous requests. To address this limitation, we propose a Universal Model for Customized Compression (UniCuCo) for LLMs, which introduces a StratNet that learns to map an arbitrary request to its optimal pruning strategy. The challenge in training StratNet lies in the high computational cost of evaluating pruning strategies and the non-differentiable nature of the pruning process, which hinders gradient backpropagation for StratNet updates. To overcome these challenges, we leverage a Gaussian process to approximate the evaluation process. Since the gradient of the Gaussian process is computable, we can use it to approximate the gradient of the non-differentiable pruning process, thereby enabling StratNet updates. Experimental results show that UniCuCo is 28 times faster than baselines in processing 64 requests, while maintaining comparable accuracy.
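The core idea above, fitting a Gaussian process to expensive, non-differentiable evaluations and using the analytic gradient of the GP posterior mean to update the strategy network, can be sketched in a toy form. This is not the authors' implementation: the `GPSurrogate` class, the quadratic `evaluate` function standing in for the pruning evaluation, the linear "StratNet", and all hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(X, Y, ls=1.0):
    # Squared-exponential kernel between row sets X (N, D) and Y (M, D).
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls ** 2)

class GPSurrogate:
    """GP regression surrogate for an expensive, non-differentiable evaluation."""
    def fit(self, X, y, noise=1e-4):
        self.X, self.ls = X, 1.0
        K = rbf(X, X, self.ls) + noise * np.eye(len(X))
        self.alpha = np.linalg.solve(K, y)
    def mean(self, x):
        return rbf(x[None], self.X, self.ls)[0] @ self.alpha
    def grad(self, x):
        # Analytic gradient of the GP posterior mean w.r.t. the input x.
        k = rbf(x[None], self.X, self.ls)[0]            # (N,)
        dk = (self.X - x) / self.ls ** 2 * k[:, None]   # (N, D)
        return dk.T @ self.alpha                        # (D,)

# Toy stand-in for the pruning evaluation: quality peaks at strategy = 0.3.
def evaluate(strategy):
    return -((strategy - 0.3) ** 2).sum()

# Toy "StratNet": a linear map from request features to a pruning strategy.
D_req, D_strat = 4, 3
W = rng.normal(scale=0.1, size=(D_strat, D_req))
request = rng.normal(size=D_req)

gp, hist_s, hist_y = GPSurrogate(), [], []
for step in range(200):
    s = W @ request                                     # propose a strategy
    s_try = s + 0.1 * rng.normal(size=D_strat)          # explore around it
    hist_s.append(s_try); hist_y.append(evaluate(s_try))
    gp.fit(np.array(hist_s), np.array(hist_y))
    g = gp.grad(s)                                      # surrogate gradient w.r.t. s
    W += 0.05 * np.outer(g, request)                    # chain rule: ds/dW = request
```

Because `gp.grad` is a closed-form expression, the update flows through the surrogate even though `evaluate` itself provides no gradient, which is the mechanism the abstract describes.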

Rongguang Ye, Ming Tang • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Language Modeling | WikiText-2 (test) | PPL | 5.65 | 1541 |
| Language Modeling | C4 | Perplexity | 12.96 | 1182 |
| Language Modeling | WikiText-2 | Perplexity (PPL) | 7.92 | 841 |
| Language Modeling | C4 (val) | PPL | 7.89 | 392 |
| Language Modeling | WikiText2 v1 (test) | Perplexity | 5.93 | 341 |
| Language Modeling | WikiText2 (val) | Perplexity (PPL) | 5.53 | 277 |
| Zero-shot Reasoning | Reasoning Suite Zero-shot (PIQA, HellaSwag, WinoGrande, ARC-e, ARC-c) (val test) | PIQA | 76.28 | 119 |
| Zero-shot Common Sense Reasoning | Zero-shot Suite (PIQA, HellaSwag, WinoGrande, ARC-e, ARC-c) (test) | PIQA | 77.91 | 95 |
| Zero-shot Evaluation | ArcC, ArcE, HS, PiQA, WG (test val) | Average Accuracy | 65.23 | 61 |
| Language Modeling | FW (val) | PPL | 6.94 | 26 |
