
The Unreasonable Ineffectiveness of the Deeper Layers

About

How is knowledge stored in an LLM's weights? We study this via layer pruning: if removing a certain layer does not affect model performance in common question-answering benchmarks, then the weights in that layer are not necessary for storing the knowledge needed to answer those questions. To find these unnecessary parameters, we identify the optimal block of layers to prune by considering similarity across layers; then, to "heal" the damage, we perform a small amount of finetuning. Surprisingly, with this method we find minimal degradation of performance until after a large fraction (up to half) of the layers are removed for some common open-weight models. From a scientific perspective, the robustness of these LLMs to the deletion of layers implies either that current pretraining methods are not properly leveraging the parameters in the deeper layers of the network or that the shallow layers play a critical role in storing knowledge. For our study, we use parameter-efficient finetuning (PEFT) methods, specifically quantization and Low Rank Adapters (QLoRA), such that each of our experiments can be performed on a single 40GB A100 GPU.

Andrey Gromov, Kushal Tirumala, Hassan Shapourian, Paolo Glorioso, Daniel A. Roberts • 2024

Related benchmarks

Task                                       Dataset               Result                   Rank
Commonsense Reasoning                      HellaSwag             Accuracy 59.7            1460
Question Answering                         ARC Challenge         Accuracy 44.62           749
Question Answering                         OpenBookQA            Accuracy 40.4            465
Language Modeling                          C4 (val)              PPL 8.09                 392
Question Answering                         ARC Easy              Accuracy 68.6            386
Natural Language Inference                 RTE                   Accuracy 70.4            367
Language Modeling                          WikiText2 v1 (test)   Perplexity 7.19          341
Physical Interaction Question Answering    PIQA                  Accuracy 69              323
Boolean Question Answering                 BoolQ                 Accuracy 53.91           307
Language Modeling                          WikiText2 (val)       Perplexity (PPL) 5.67    277

(Showing 10 of 35 rows)
