Not All Layers of LLMs Are Necessary During Inference

About

Due to their large number of parameters, the inference phase of Large Language Models (LLMs) is resource-intensive. However, not all requests posed to LLMs are equally difficult to handle. Through analysis, we show that for some tasks, LLMs can achieve results comparable to the final output at some intermediate layers. That is, not all layers of LLMs are necessary during inference. If we can predict at which layer the inferred results match the final results (produced by evaluating all layers), we could significantly reduce the inference cost. To this end, we propose a simple yet effective algorithm named AdaInfer to adaptively terminate the inference process for an input instance. AdaInfer relies on easily obtainable statistical features and classic classifiers like SVM. Experiments on well-known LLMs, such as the Llama2 series and OPT, show that AdaInfer achieves an average pruning ratio of 17.8%, and up to 43% on sentiment tasks, with nearly no performance drop (<1%). Because AdaInfer does not alter LLM parameters, LLMs incorporating AdaInfer maintain generalizability across tasks.
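The early-exit idea above can be sketched in a few lines. This is a hypothetical illustration, not the authors' implementation: it assumes per-layer logits are available (e.g. from intermediate exit heads), extracts two cheap statistical features (top-1 probability and the gap to the runner-up), and uses a hand-set linear decision function as a stand-in for the trained SVM the paper mentions.

```python
# Hypothetical sketch of AdaInfer-style adaptive early exit.
# Assumed, not from the paper: the exact features, the stand-in linear
# classifier (weights w, bias b), and the toy per-layer logits below.

import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def features(logits):
    # Cheap statistical features of the current prediction:
    # top-1 probability and the top-1/top-2 gap.
    probs = sorted(softmax(logits), reverse=True)
    return [probs[0], probs[0] - probs[1]]

def stop_classifier(feats, w=(4.0, 4.0), b=-3.5):
    # Stand-in for the pre-trained SVM: a linear decision function.
    score = sum(wi * fi for wi, fi in zip(w, feats)) + b
    return score > 0.0  # True => confident enough to stop here

def adaptive_inference(layer_logits):
    """layer_logits: logits produced at each layer's (assumed) exit head.
    Returns (predicted_token_id, index of the layer inference stopped at)."""
    for i, logits in enumerate(layer_logits):
        if stop_classifier(features(logits)):
            return max(range(len(logits)), key=logits.__getitem__), i
    last = layer_logits[-1]
    return max(range(len(last)), key=last.__getitem__), len(layer_logits) - 1

# Toy run: logits sharpen with depth; inference halts once the
# classifier is confident, skipping the remaining layers.
layers = [
    [0.1, 0.2, 0.15],  # diffuse distribution -> keep going
    [0.1, 1.0, 0.2],   # sharper, still below threshold
    [0.1, 5.0, 0.2],   # confident -> stop here
    [0.1, 9.0, 0.2],   # never evaluated
]
token, layer = adaptive_inference(layers)
```

Here the classifier fires at the third layer, so the last layer is skipped while the predicted token already matches the full forward pass, which is the cost saving the abstract describes.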

Siqi Fan, Xin Jiang, Xiang Li, Xuying Meng, Peng Han, Shuo Shang, Aixin Sun, Yequan Wang, Zhongyuan Wang• 2024

Related benchmarks

Task | Dataset | Result (Accuracy) | Rank
Subjectivity Classification | Subj | 50.85 | 329
Question Classification | TREC | 26 | 259
Sentiment Analysis | MR | 0.535 | 160
Sentiment Analysis | CR | 50 | 141
Sentiment Analysis | SST-5 | 28.14 | 106
Sentiment Classification | MPQA | 60.9 | 35
