
POP: Prefill-Only Pruning for Efficient Large Model Inference

About

Large Language Models (LLMs) and Vision-Language Models (VLMs) have demonstrated remarkable capabilities. However, their deployment is hindered by significant computational costs. Existing structured pruning methods, while hardware-efficient, often suffer from significant accuracy degradation. In this paper, we argue that this failure stems from a stage-agnostic pruning approach that overlooks the asymmetric roles of the prefill and decode stages. By introducing a virtual gate mechanism, our importance analysis reveals that deep layers are critical for next-token prediction (decode) but largely redundant for context encoding (prefill). Leveraging this insight, we propose Prefill-Only Pruning (POP), a stage-aware inference strategy that safely omits deep layers during the computationally intensive prefill stage while retaining the full model for the sensitive decode stage. To enable the transition between stages, we introduce independent Key-Value (KV) projections to maintain cache integrity, and a boundary handling strategy to ensure the accuracy of the first generated token. Extensive experiments on Llama-3.1, Qwen3-VL, and Gemma-3 across diverse modalities demonstrate that POP achieves up to 1.37× speedup in prefill latency with minimal performance loss, effectively overcoming the accuracy-efficiency trade-off limitations of existing structured pruning methods.
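The core idea in the abstract — skip deep layers at prefill but still fill their KV cache so the full model can run at decode — can be illustrated with a toy sketch. This is not the authors' implementation: the layer blocks are stand-in residual maps, the layer split (`KEEP`), the projection matrices `Wk`/`Wv`, and the choice to feed the last shallow hidden state into the pruned layers' KV projections are all illustrative assumptions, and the paper's boundary handling for the first generated token is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
D, L, KEEP = 16, 8, 5  # hidden dim, total layers, layers kept at prefill (assumed)

# Toy per-layer weights (stand-ins for real attention/MLP blocks)
W = [rng.standard_normal((D, D)) / np.sqrt(D) for _ in range(L)]
# Independent K/V projections, one pair per layer; for pruned deep layers
# they produce the cache from the last shallow hidden state (assumption)
Wk = [rng.standard_normal((D, D)) / np.sqrt(D) for _ in range(L)]
Wv = [rng.standard_normal((D, D)) / np.sqrt(D) for _ in range(L)]

def layer(h, i):
    """Toy transformer block: a residual nonlinear map."""
    return h + np.tanh(h @ W[i])

def prefill(x):
    """Run only the shallow layers, but fill the KV cache for ALL layers."""
    h, cache = x, {}
    for i in range(L):
        cache[i] = (h @ Wk[i], h @ Wv[i])  # KV written for every layer
        if i < KEEP:
            h = layer(h, i)  # deep layers (i >= KEEP) are skipped here
    return h, cache

def decode_step(tok, cache):
    """Decode runs the full model and appends to every layer's cache."""
    h = tok
    for i in range(L):
        k, v = cache[i]
        cache[i] = (np.vstack([k, h @ Wk[i]]), np.vstack([v, h @ Wv[i]]))
        h = layer(h, i)
    return h

prompt = rng.standard_normal((6, D))  # 6 prompt tokens
h, cache = prefill(prompt)            # only KEEP layers of compute
out = decode_step(rng.standard_normal((1, D)), cache)  # full-depth decode
```

The point of the sketch is the asymmetry: `prefill` pays for only `KEEP` of the `L` layer blocks, yet `decode_step` finds a complete per-layer cache and runs all `L` layers unchanged.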

Junhui He, Zhihui Fu, Jun Wang, Qingan Li• 2026

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Commonsense Reasoning | HellaSwag | Accuracy 81.96 | 1891 |
| Code Generation | HumanEval | -- | 1036 |
| Text-based Visual Question Answering | TextVQA | Accuracy 80.73 | 807 |
| Commonsense Reasoning | PIQA | Accuracy 80.36 | 751 |
| Multimodal Understanding | MMMU | Accuracy 50.67 | 437 |
| GUI Grounding | ScreenSpot | Avg Acc 86.4 | 133 |
| Commonsense Reasoning | WinoG | Accuracy 74.59 | 48 |
| Spatial Reasoning | RealworldQA | Accuracy 69.28 | 45 |
| Multi-discipline Understanding | MMLU | Accuracy 75.05 | 33 |
| Long-context Question Answering | HotpotQA | Mean Score 63.13 | 21 |

(10 of 11 rows shown)
