
Streamlining Redundant Layers to Compress Large Language Models

About

This paper introduces LLM-Streamline, a pioneering work on layer pruning for large language models (LLMs). It is based on the observation that different layers have varying impacts on the hidden states, which makes it possible to identify the less important layers and prune them.

LLM-Streamline comprises two parts: layer pruning, which removes the consecutive layers with the lowest importance according to a target sparsity, and layer replacement, a novel module that trains a lightweight network to substitute for the pruned layers and mitigate the resulting performance loss. Additionally, a new metric called stability is proposed to address the limitations of the widely used accuracy metric in evaluating model compression. Experiments show that LLM-Streamline outperforms both previous and concurrent state-of-the-art pruning methods in both performance and training efficiency.

Our code is available at https://github.com/RUCKBReasoning/LLM-Streamline
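The pruning step described above can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: it assumes layer importance is measured as one minus the cosine similarity between a layer's input and output hidden states (a common proxy for how much a layer changes the representation), and that the contiguous block of layers with the lowest summed importance is the one to prune.

```python
import numpy as np

def layer_importance(hidden_states):
    """Score each layer as 1 - cosine similarity between its input and
    output hidden states, averaged over tokens (assumed proxy metric).
    hidden_states: list of (tokens, dim) arrays, length = n_layers + 1."""
    scores = []
    for h_in, h_out in zip(hidden_states[:-1], hidden_states[1:]):
        cos = np.sum(h_in * h_out, axis=-1) / (
            np.linalg.norm(h_in, axis=-1) * np.linalg.norm(h_out, axis=-1)
        )
        scores.append(1.0 - float(np.mean(cos)))
    return scores

def least_important_window(scores, n_prune):
    """Return the start index of the contiguous block of n_prune layers
    whose summed importance is lowest (the block to remove)."""
    sums = [sum(scores[i:i + n_prune]) for i in range(len(scores) - n_prune + 1)]
    return int(np.argmin(sums))

# Synthetic example: 4 "layers" where the last two barely change the
# hidden states, so they form the least important contiguous block.
rng = np.random.default_rng(0)
h = [rng.normal(size=(4, 8))]
for noise in (1.0, 1.0, 0.01, 0.01):
    h.append(h[-1] + noise * rng.normal(size=(4, 8)))
start = least_important_window(layer_importance(h), n_prune=2)
```

In the full method, the selected block would then be replaced by a lightweight trained module rather than simply deleted.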

Xiaodong Chen, Yuxuan Hu, Jing Zhang, Yanling Wang, Cuiping Li, Hong Chen• 2024

Related benchmarks

| Task | Dataset | Accuracy | Rank |
| --- | --- | --- | --- |
| Commonsense Reasoning | HellaSwag | 61.2 | 1891 |
| Physical Commonsense Reasoning | PIQA | 72 | 572 |
| Multi-task Language Understanding | MMLU | 45.5 | 413 |
| Physical Interaction Question Answering | PIQA | 71.5 | 333 |
| Boolean Question Answering | BoolQ | 67.5 | 323 |
| Multi-task Language Understanding | MMLU | 47 | 321 |
| Reading Comprehension | RACE high | 38.7 | 295 |
| Reading Comprehension | RACE mid | 38 | 196 |
| Coreference Resolution | WSC | 43.3 | 99 |
| Reading Comprehension | C3 | 43.3 | 73 |

(Showing 10 of 22 rows)
