
GPrune-LLM: Generalization-Aware Structured Pruning for Large Language Models

About

Structured pruning is widely used to compress large language models (LLMs), yet its effectiveness depends heavily on neuron importance estimation. Most existing methods estimate neuron importance from activation statistics on a single calibration dataset, which introduces calibration bias and degrades downstream cross-task generalization. We observe that neurons exhibit heterogeneous distribution sensitivity, with distribution-robust neurons maintaining consistent rankings across datasets and distribution-sensitive neurons showing high cross-dataset ranking variance. Based on this, we identify two structural limitations in existing methods. First, ranking all neurons within a shared space causes distribution-sensitive neurons that strongly activate on calibration inputs to dominate, crowding out distribution-robust neurons critical for out-of-distribution tasks. Second, applying activation-based importance metrics uniformly can be unreliable. Distribution-sensitive neurons that infrequently activate on calibration data receive insufficient activation signal for accurate local ranking. To address these limitations, we propose GPrune-LLM, a generalization-aware structured pruning framework that explicitly accounts for neuron differences in cross-distribution behavior. We first partition neurons into behavior-consistent modules to localize ranking competition, then evaluate activation-based metric reliability per module according to distribution sensitivity and score magnitude. For modules where activation-based scoring is unreliable, we switch to an activation-independent metric. Finally, we adaptively learn module-wise sparsity. Extensive experiments across multiple downstream tasks demonstrate GPrune-LLM's consistent improvements in post-compression generalization, particularly at high sparsity, and reduced dependence on importance metric choice.
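The key observable behind the method is cross-dataset ranking variance: a neuron whose importance rank is stable across calibration datasets is distribution-robust, while one whose rank fluctuates is distribution-sensitive. As a minimal sketch (not the paper's implementation; the array shapes, `rank_variance` helper, and median threshold are illustrative assumptions), this signal can be computed as:

```python
import numpy as np

def rank_variance(importance_by_dataset):
    """Per-neuron variance of importance ranks across calibration datasets.

    importance_by_dataset: (D, N) array of activation-based importance
    scores for N neurons on D calibration datasets. Low variance means
    the neuron's ranking is stable (distribution-robust); high variance
    means it is distribution-sensitive.
    """
    # Rank neurons within each dataset (0 = least important).
    ranks = np.argsort(np.argsort(importance_by_dataset, axis=1), axis=1)
    # Variance of each neuron's rank across the D datasets.
    return ranks.var(axis=0)

# Toy example: 3 calibration datasets, 5 neurons (random scores).
rng = np.random.default_rng(0)
scores = rng.random((3, 5))
var = rank_variance(scores)

# Illustrative split: threshold choice is an assumption, not from the paper.
threshold = np.median(var)
robust = var <= threshold       # candidates for shared, stable ranking
sensitive = ~robust             # candidates for per-module treatment
```

In the paper's terms, this partition would feed the later steps: ranking competition is localized within behavior-consistent modules, and modules whose sensitive neurons receive too little activation signal fall back to an activation-independent metric.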

Xiaoyun Liu, Divya Saxena, Jiannong Cao, Yuqing Zhao, Yiying Dong, Penghui Ruan • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Language Modeling | WikiText2 | Perplexity | 14.94 | 2839 |
| Language Modeling | PTB | Perplexity | 59.93 | 1034 |
| Multimodal Understanding | MMBench | -- | -- | 637 |
| Visual Question Answering | ScienceQA | Accuracy | 63.33 | 370 |
| Multimodal Understanding | MMMU | MMMU Score | 35.67 | 69 |
| Zero-shot Common Sense Reasoning | Commonsense Reasoning Benchmarks (BoolQ, PIQA, HellaSwag, WinoGrande, ARC-e, ARC-c, OBQA) zero-shot | Avg Accuracy | 41.86 | 63 |
| Zero-shot Reasoning | Reasoning Tasks (BoolQ, PIQA, HellaSwag, WinoGrande, ARC-e, ARC-c, OBQA) Zero-shot | BoolQ Accuracy (Zero-shot) | 69.51 | 55 |
| Zero-shot Commonsense Reasoning | Commonsense Reasoning Suite | BoolQ Accuracy | 70.98 | 32 |
| Multimodal Reasoning | IE-reasoning | IE-reasoning Score | 310.4 | 9 |
| Multimodal Perception | IE-percept | IE-percept Score | 1420 | 9 |
