
POP: Online Structural Pruning Enables Efficient Inference of Large Foundation Models

About

Large foundation models (LFMs) achieve strong performance through scaling, yet current structural pruning methods derive fixed pruning decisions before or during inference, overlooking sparsity patterns that emerge during autoregressive token generation. In this paper, we propose POP (Partition-guided Online Pruning), an efficient online structural pruning framework that enables context-conditioned dynamic pruning with minimal computational overhead. POP partitions model channels into retained, candidate, and pruned regions: the prefilling stage defines a coarse pruning partition, and the decoding stage generates a fine-grained mask within the candidate region, avoiding full-channel re-evaluation. The coarse partition preserves consistently important weights, while the fine-grained masking provides context-conditioned variation during decoding. Moreover, POP is a lightweight, plug-and-play method that requires no preprocessing such as offline calibration, retraining, or learned predictors. Extensive evaluations across diverse LFMs, including large language models (LLMs), mixture-of-experts models (MoEs), and vision-language models (VLMs), demonstrate that POP consistently delivers higher accuracy than existing pruning approaches while incurring lower computational overhead and inference latency.
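The partition-then-mask idea from the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the per-channel importance scores, the split fractions, and the function names are all hypothetical placeholders for whatever criterion POP actually uses.

```python
import numpy as np

def coarse_partition(importance, keep_frac=0.5, candidate_frac=0.3):
    """Prefill stage (sketch): split channel indices into retained /
    candidate / pruned regions by sorting a per-channel importance
    score. The scoring rule and fractions here are illustrative."""
    n = importance.size
    order = np.argsort(importance)[::-1]  # channels, most important first
    n_keep = int(n * keep_frac)
    n_cand = int(n * candidate_frac)
    retained = order[:n_keep]                # always kept
    candidate = order[n_keep:n_keep + n_cand]  # re-scored each decode step
    pruned = order[n_keep + n_cand:]         # always dropped
    return retained, candidate, pruned

def decode_step_mask(n_channels, retained, candidate, cand_scores,
                     cand_keep_frac=0.5):
    """Decoding stage (sketch): build a binary channel mask without
    re-evaluating all channels. Retained channels stay on, pruned
    channels stay off; only the candidate region is re-scored
    against the current context."""
    mask = np.zeros(n_channels)
    mask[retained] = 1.0
    k = max(1, int(len(candidate) * cand_keep_frac))
    top = np.argsort(cand_scores)[::-1][:k]  # context-conditioned top-k
    mask[candidate[top]] = 1.0
    return mask
```

Because each decode step only sorts the (small) candidate region, the per-token masking cost is independent of the full channel count, which is the source of the low overhead claimed above.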

Yi Chen, Wonjin Shin, Shuhong Liu, Tho Mai, Jeongmo Lee, Chuanbo Hua, Kun Wang, Jun Liu, Joo-Young Kim • 2026

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Language Modeling | WikiText | PPL 3.92 | 479 |
| Question Answering | 7 QA tasks | Accuracy 64.64 | 42 |
| Text Generation | 5 Generation tasks | Accuracy 57.96 | 36 |
| Instance Segmentation | LVIS v1 (val) | AP (m, r) 34.14 | 34 |
| Instance Segmentation | MS-COCO 2014 (val) | AP^m 52.08 | 33 |
| Visual Question Answering | VQA 5 tasks | Accuracy (%) 62.14 | 14 |
