
Streamlining Redundant Layers to Compress Large Language Models

About

This paper introduces LLM-Streamline, a pioneering work on layer pruning for large language models (LLMs). It is based on the observation that different layers have varying impacts on the hidden states, which makes it possible to identify less important layers to prune. LLM-Streamline comprises two parts: layer pruning, which removes the consecutive layers with the lowest importance according to a target sparsity, and layer replacement, a novel module that trains a lightweight network to stand in for the pruned layers and mitigate the resulting performance loss. Additionally, a new metric called stability is proposed to address the limitations of the widely used accuracy metric in evaluating model compression. Experiments show that LLM-Streamline outperforms both prior and concurrent state-of-the-art pruning methods in both performance and training efficiency. Our code is available at https://github.com/RUCKBReasoning/LLM-Streamline
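The layer-pruning step can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes layer importance is measured by how little a layer changes its hidden states (via cosine similarity between a layer's input and output), so a span of near-identity layers becomes the pruning candidate. All function names and the toy hidden states are illustrative.

```python
import math

def cosine_similarity(a, b):
    # Plain cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def layer_importance(hidden_states):
    """hidden_states[i] is the input to layer i and hidden_states[i + 1]
    its output. A layer whose output is nearly identical to its input
    (cosine similarity close to 1) gets an importance close to 0."""
    return [1.0 - cosine_similarity(hidden_states[i], hidden_states[i + 1])
            for i in range(len(hidden_states) - 1)]

def least_important_span(importances, n_prune):
    """Return the start index of the n_prune consecutive layers with the
    lowest summed importance -- the span to remove for the target sparsity."""
    scores = [sum(importances[i:i + n_prune])
              for i in range(len(importances) - n_prune + 1)]
    return min(range(len(scores)), key=scores.__getitem__)

# Toy example: 4 layers; the middle two barely change the hidden state,
# so they form the least important consecutive span.
hidden = [[1.0, 0.0], [0.9, 0.1], [0.91, 0.1], [0.92, 0.1], [0.2, 0.8]]
imp = layer_importance(hidden)
print(least_important_span(imp, n_prune=2))  # span starting at layer 1
```

In the paper's full method, the removed span would then be replaced by a trained lightweight module rather than deleted outright; this sketch only covers the importance-based selection.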

Xiaodong Chen, Yuxuan Hu, Jing Zhang, Yanling Wang, Cuiping Li, Hong Chen • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Commonsense Reasoning | HellaSwag | Accuracy | 61.2 | 1460 |
| Physical Commonsense Reasoning | PIQA | Accuracy | 72 | 329 |
| Physical Interaction Question Answering | PIQA | Accuracy | 71.5 | 323 |
| Boolean Question Answering | BoolQ | Accuracy | 67.5 | 307 |
| Reading Comprehension | RACE high | Accuracy | 38.7 | 295 |
| Multi-task Language Understanding | MMLU | Accuracy | 45.5 | 206 |
| Reading Comprehension | RACE mid | Accuracy | 38 | 196 |
| Coreference Resolution | WSC | Accuracy | 43.3 | 96 |
| Multi-task Language Understanding | MMLU | Accuracy | 47 | 87 |
| Reading Comprehension | C3 | Accuracy | 43.3 | 56 |

Showing 10 of 22 rows.
