
POP: Prefill-Only Pruning for Efficient Large Model Inference

About

Large Language Models (LLMs) and Vision-Language Models (VLMs) have demonstrated remarkable capabilities. However, their deployment is hindered by significant computational costs. Existing structured pruning methods, while hardware-efficient, often suffer from significant accuracy degradation. In this paper, we argue that this failure stems from a stage-agnostic pruning approach that overlooks the asymmetric roles between the prefill and decode stages. By introducing a virtual gate mechanism, our importance analysis reveals that deep layers are critical for next-token prediction (decode) but largely redundant for context encoding (prefill). Leveraging this insight, we propose Prefill-Only Pruning (POP), a stage-aware inference strategy that safely omits deep layers during the computationally intensive prefill stage while retaining the full model for the sensitive decode stage. To enable the transition between stages, we introduce independent Key-Value (KV) projections to maintain cache integrity, and a boundary handling strategy to ensure the accuracy of the first generated token. Extensive experiments on Llama-3.1, Qwen3-VL, and Gemma-3 across diverse modalities demonstrate that POP achieves up to 1.37× speedup in prefill latency with minimal performance loss, effectively overcoming the accuracy-efficiency trade-off limitations of existing structured pruning methods.
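The core idea in the abstract can be illustrated with a toy sketch: prefill runs only the shallow layers, yet every layer (including the skipped deep ones) still receives a KV-cache entry via its own projection, so the full-depth decode stage can attend to a cache of the expected shape. This is a hypothetical illustration, not the authors' implementation; the layer count, split point, and the stand-in transforms are all invented for clarity.

```python
# Toy sketch of stage-aware inference in the spirit of POP.
# Assumptions (not from the paper): 8 layers, deepest 3 skipped at prefill,
# and trivial arithmetic stand-ins for transformer blocks and KV projections.

NUM_LAYERS = 8
PREFILL_LAYERS = 5  # shallow layers executed during prefill

def layer_forward(layer_idx, hidden):
    # Stand-in for a transformer block: a cheap deterministic transform.
    return hidden + layer_idx + 1

def kv_project(layer_idx, hidden):
    # Independent per-layer KV projection: even a layer skipped at prefill
    # gets a cache entry, derived here from the last computed hidden state.
    return (("k", layer_idx, hidden), ("v", layer_idx, hidden))

def prefill(prompt_hidden):
    """Run only the shallow layers, but fill KV entries for ALL layers."""
    hidden = prompt_hidden
    kv_cache = {}
    for i in range(NUM_LAYERS):
        if i < PREFILL_LAYERS:
            hidden = layer_forward(i, hidden)
        # Skipped deep layers still write a cache entry, keeping the
        # cache layout identical to a full-model prefill.
        kv_cache[i] = kv_project(i, hidden)
    return hidden, kv_cache

def decode_step(token_hidden, kv_cache):
    """Decode uses the full model depth and the prefilled cache."""
    hidden = token_hidden
    for i in range(NUM_LAYERS):
        k, v = kv_cache[i]  # toy: real attention over cached K/V goes here
        hidden = layer_forward(i, hidden)
    return hidden
```

In this sketch the prefill cost scales with `PREFILL_LAYERS` rather than `NUM_LAYERS`, while decode remains full-depth, mirroring the asymmetry the abstract describes.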

Junhui He, Zhihui Fu, Jun Wang, Qingan Li • 2026

Related benchmarks

Task                                    Dataset        Metric       Result   Rank
Commonsense Reasoning                   HellaSwag      Accuracy     81.96    1460
Code Generation                         HumanEval      --           --       850
Commonsense Reasoning                   PIQA           Accuracy     80.36    647
Text-based Visual Question Answering    TextVQA        Accuracy     80.73    496
Multimodal Understanding                MMMU           Accuracy     50.67    275
GUI Grounding                           ScreenSpot     Avg Acc      86.4     76
Spatial Reasoning                       RealworldQA    Accuracy     69.28    32
Long-context Question Answering         HotpotQA       Mean Score   63.13    21
Commonsense Reasoning                   WinoG          Accuracy     74.59    19
Long-context Question Answering         MultifieldQA   Accuracy     57.33    15
