GPrune-LLM: Generalization-Aware Structured Pruning for Large Language Models
About
Structured pruning is widely used to compress large language models (LLMs), yet its effectiveness depends heavily on neuron importance estimation. Most existing methods estimate neuron importance from activation statistics on a single calibration dataset, which introduces calibration bias and degrades downstream cross-task generalization. We observe that neurons exhibit heterogeneous distribution sensitivity: distribution-robust neurons maintain consistent importance rankings across datasets, while distribution-sensitive neurons show high cross-dataset ranking variance.

Based on this observation, we identify two structural limitations in existing methods. First, ranking all neurons within a shared space lets distribution-sensitive neurons that strongly activate on calibration inputs dominate, crowding out distribution-robust neurons critical for out-of-distribution tasks. Second, applying activation-based importance metrics uniformly can be unreliable: distribution-sensitive neurons that rarely activate on calibration data receive too little activation signal for accurate local ranking.

To address these limitations, we propose GPrune-LLM, a generalization-aware structured pruning framework that explicitly accounts for differences in neurons' cross-distribution behavior. We first partition neurons into behavior-consistent modules to localize ranking competition, then evaluate the reliability of activation-based metrics per module according to distribution sensitivity and score magnitude. For modules where activation-based scoring is unreliable, we switch to an activation-independent metric. Finally, we adaptively learn module-wise sparsity. Extensive experiments across multiple downstream tasks demonstrate GPrune-LLM's consistent improvements in post-compression generalization, particularly at high sparsity, and its reduced dependence on the choice of importance metric.
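The core diagnostic above is cross-dataset ranking variance: score every neuron on several calibration sets, rank it within each set, and treat neurons whose rank is stable as distribution-robust. A minimal NumPy sketch of that idea follows; the `rank_variance` helper, the toy scores, and the median threshold are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def rank_variance(importance_per_dataset):
    """importance_per_dataset: (D, N) array of per-neuron importance
    scores from D calibration datasets. Returns the per-neuron variance
    of its rank across datasets (low variance = distribution-robust).
    Double argsort converts scores to ranks, 0 = most important."""
    ranks = np.argsort(np.argsort(-importance_per_dataset, axis=1), axis=1)
    return ranks.var(axis=0)

# Toy example: 3 hypothetical calibration sets, 5 neurons.
scores = np.array([
    [0.90, 0.10, 0.50, 0.30, 0.70],
    [0.80, 0.20, 0.60, 0.10, 0.90],
    [0.95, 0.15, 0.40, 0.60, 0.80],
])
rv = rank_variance(scores)

# Partition with a hypothetical median-variance threshold; a real
# pipeline would tune this split per layer.
robust = np.where(rv <= np.median(rv))[0]     # stable ranking across sets
sensitive = np.where(rv > np.median(rv))[0]   # ranking flips across sets
```

Under this sketch, ranking competition would then be localized by scoring `robust` and `sensitive` groups separately rather than in one shared ranking, and an activation-independent metric (e.g. a weight-magnitude score) would replace activation statistics for groups flagged as unreliable.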
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Language Modeling | WikiText2 | Perplexity | 14.94 | 2839 |
| Language Modeling | PTB | Perplexity | 59.93 | 1034 |
| Multimodal Understanding | MMBench | -- | -- | 637 |
| Visual Question Answering | ScienceQA | Accuracy | 63.33 | 370 |
| Multimodal Understanding | MMMU | MMMU Score | 35.67 | 69 |
| Zero-shot Commonsense Reasoning | Commonsense Reasoning Benchmarks (BoolQ, PIQA, HellaSwag, WinoGrande, ARC-e, ARC-c, OBQA) | Avg Accuracy | 41.86 | 63 |
| Zero-shot Reasoning | Reasoning Tasks (BoolQ, PIQA, HellaSwag, WinoGrande, ARC-e, ARC-c, OBQA) | BoolQ Accuracy | 69.51 | 55 |
| Zero-shot Commonsense Reasoning | Commonsense Reasoning Suite | BoolQ Accuracy | 70.98 | 32 |
| Multimodal Reasoning | IE-reasoning | IE-reasoning Score | 310.4 | 9 |
| Multimodal Perception | IE-percept | IE-percept Score | 1420 | 9 |