
Prompt-based Depth Pruning of Large Language Models

About

Depth pruning aims to reduce the inference cost of a large language model without any hardware-specific complications, by simply removing several less important transformer blocks. However, our empirical findings suggest that the importance of a transformer block may be highly task-dependent -- a block that is crucial for one task can be removed without degrading the accuracy on another task. Based on this observation, we develop a dynamic depth pruning algorithm, coined PuDDing (Prompt-routed Dynamic Depth Pruning), which determines which blocks to omit from the model based on the input prompt. PuDDing operates by training a lightweight router to predict the best omission set among a set of options, where this option set has also been constructed in a data-driven manner. Empirical results on commonsense reasoning benchmarks demonstrate that PuDDing effectively accelerates the inference of language models, and achieves better on-task performance than static depth pruning baselines.
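The mechanism described above -- a lightweight router that maps a prompt to one of several precomputed omission sets, after which the skipped blocks are simply dropped from the forward pass -- can be sketched as follows. This is a minimal illustration, not the paper's implementation: the linear router, toy block sizes, and the three candidate omission sets are all assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def route(prompt_embedding, router_weight):
    """Lightweight router (hypothetical linear scorer): returns the
    index of the predicted-best omission set for this prompt."""
    scores = prompt_embedding @ router_weight   # one score per candidate set
    return int(np.argmax(scores))

def pruned_forward(block_weights, hidden, omission_set):
    """Forward pass that skips the blocks in the chosen omission set.
    Each 'block' is a toy linear map standing in for a transformer block."""
    for i, w in enumerate(block_weights):
        if i in omission_set:
            continue                            # depth pruning: drop this block
        hidden = np.tanh(hidden @ w)
    return hidden

# Illustrative sizes, not from the paper.
dim, num_blocks = 16, 8
omission_sets = [{1, 5}, {2, 6}, {3, 7}]        # candidate sets (data-driven in PuDDing)
router_weight = rng.normal(size=(dim, len(omission_sets)))
block_weights = [rng.normal(size=(dim, dim)) for _ in range(num_blocks)]

prompt = rng.normal(size=dim)
choice = route(prompt, router_weight)
output = pruned_forward(block_weights, prompt, omission_sets[choice])
print(output.shape)                             # (16,)
```

Because each omission set removes two of the eight blocks here, every routed prompt pays for only six block evaluations; in the paper the same idea trades a tiny routing cost for skipping full transformer blocks.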

Juyun Wee, Minjae Park, Jaeho Lee • 2025

Related benchmarks

Task | Dataset | Result | Rank
Commonsense Reasoning | HellaSwag | -- | 1891
Commonsense Reasoning | WinoGrande | -- | 1085
Natural Language Inference | RTE | Accuracy 53.8 | 448
Question Answering | ARC-E | Accuracy 27.3 | 416
Question Answering | BoolQ | -- | 317
Question Answering | ARC-C | Accuracy 23.7 | 192
Recognizing Textual Entailment | RTE | Accuracy 64.3 | 47
Natural Language Understanding | NLP Suite (BoolQ, RTE, HellaSwag, WinoG, ARC-E, ARC-C, OpenBookQA) zero-shot | Average Accuracy 48.8 | 41
Science Question Answering | ARC Easy | Accuracy (Character-level) 51.3 | 20
Science Question Answering | ARC Challenge | Accuracy (ARC) 32.1 | 19

Showing 10 of 15 rows
