
SLEB: Streamlining LLMs through Redundancy Verification and Elimination of Transformer Blocks

About

Large language models (LLMs) have proven to be highly effective across various natural language processing tasks. However, their large number of parameters poses significant challenges for practical deployment. Pruning, a technique aimed at reducing the size and complexity of LLMs, offers a potential solution by removing redundant components from the network. Despite the promise of pruning, existing methods often struggle to achieve substantial end-to-end LLM inference speedup. In this paper, we introduce SLEB, a novel approach designed to streamline LLMs by eliminating redundant transformer blocks. We choose the transformer block as the fundamental unit for pruning, because LLMs exhibit block-level redundancy with high similarity between the outputs of neighboring blocks. This choice allows us to effectively enhance the processing speed of LLMs. Our experimental results demonstrate that SLEB outperforms previous LLM pruning methods in accelerating LLM inference while also maintaining superior perplexity and accuracy, making SLEB a promising technique for enhancing the efficiency of LLMs. The code is available at: https://github.com/jiwonsong-dev/SLEB.
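The key observation in the abstract — that neighboring transformer blocks produce highly similar outputs, so blocks whose output barely differs from their input can be removed — can be sketched as follows. This is a minimal NumPy illustration, not the paper's actual algorithm: the function names, the per-token cosine-similarity score, and the one-shot selection are assumptions for clarity; SLEB's exact redundancy metric and its iterative removal procedure are in the linked repository.

```python
import numpy as np

def block_redundancy_scores(hidden_states):
    """Score each transformer block by how little it changes its input.

    hidden_states: list of L+1 arrays of shape [tokens, dim] — the
    hidden states before block 0, after block 0, ..., after block L-1
    (e.g. collected on a small calibration set).
    Returns one score per block: the mean cosine similarity between the
    block's input and output. A score near 1 means the block is nearly
    an identity map, i.e. a redundancy candidate.
    """
    scores = []
    for x_in, x_out in zip(hidden_states[:-1], hidden_states[1:]):
        num = np.sum(x_in * x_out, axis=-1)
        den = np.linalg.norm(x_in, axis=-1) * np.linalg.norm(x_out, axis=-1)
        scores.append(float(np.mean(num / den)))
    return scores

def prune_most_redundant(scores, n_remove):
    """Return the indices of the n_remove highest-scoring (most
    redundant) blocks, sorted ascending for readability."""
    order = np.argsort(scores)[::-1]           # most redundant first
    return sorted(order[:n_remove].tolist())
```

In a real setting the hidden states would come from running the model on calibration text (e.g. via a hook on each block); here any list of `[tokens, dim]` arrays works. Pruning then simply means skipping the selected blocks at inference time, which is what yields the end-to-end speedup the abstract emphasizes.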

Jiwon Song, Kyungseok Oh, Taesu Kim, Hyungjun Kim, Yulhwa Kim, Jae-Joon Kim • 2024

Related benchmarks

Task | Dataset | Metric | Result | Rank
Language Modeling | WikiText2 | Perplexity | 14.2428 | 1875
Language Modeling | WikiText-2 (test) | PPL | 5.85 | 1541
Language Modeling | C4 | Perplexity | 12.9682 | 1182
Language Modeling | PTB | Perplexity | 52.9183 | 650
Language Modeling | WikiText2 v1 (test) | Perplexity | 4.88 | 341
Zero-shot Reasoning | Reasoning Suite Zero-shot (PIQA, HellaSwag, WinoGrande, ARC-e, ARC-c) (val test) | PIQA | 78.18 | 119
Zero-shot Common Sense Reasoning | Zero-shot Suite (PIQA, HellaSwag, WinoGrande, ARC-e, ARC-c) (test) | PIQA | 80.14 | 95
Zero-shot Evaluation | Eight datasets average | Accuracy | 61.09 | 87
Instruction Following | Alpaca | -- | -- | 63
Classification | Zero-shot Evaluation Suite (BoolQ, PIQA, HellaSwag, WinoGrande, ARC-e, ARC-c, OBQA) | Average Accuracy (Zero-Shot Suite) | 62.79 | 59

(Showing 10 of 19 rows)
