
LESA: Learnable LLM Layer Scaling-Up

About

Training Large Language Models (LLMs) from scratch requires immense computational resources, making it prohibitively expensive. Model scaling-up offers a promising solution by leveraging the parameters of smaller models to create larger ones. However, existing depth scaling-up methods rely on empirical heuristic rules for layer duplication, which result in poorer initialization and slower convergence during continual pre-training. We propose LESA, a novel learnable method for depth scaling-up. By concatenating parameters from each layer and applying Singular Value Decomposition, we uncover latent patterns between layers, suggesting that inter-layer parameters can be learned. LESA uses a neural network to predict the parameters inserted between adjacent layers, enabling better initialization and faster training. Experiments show that LESA outperforms existing baselines, achieving superior performance with less than half the computational cost during continual pre-training. Extensive analyses demonstrate its effectiveness across different model sizes and tasks.
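The abstract describes two ingredients: stacking each layer's flattened parameters and applying SVD to reveal shared structure across depth, then predicting the parameters of a new layer inserted between two adjacent ones. The sketch below illustrates that pipeline in NumPy. It is an illustrative toy only: the function names are hypothetical, and the midpoint interpolation in the SVD coefficient space stands in for the neural-network predictor that the paper actually trains.

```python
import numpy as np

def layer_svd_patterns(layer_weights):
    """Stack each layer's flattened parameters as a column and factor with
    SVD, exposing the cross-layer low-rank structure the abstract mentions."""
    M = np.stack([w.ravel() for w in layer_weights], axis=1)  # (n_params, n_layers)
    # full_matrices=False: U is (n_params, n_layers), Vt is (n_layers, n_layers)
    U, S, Vt = np.linalg.svd(M, full_matrices=False)
    return U, S, Vt

def predict_inserted_layer(layer_weights, i):
    """Toy stand-in for LESA's learned predictor: predict the layer to be
    inserted between layers i and i+1 by averaging their coefficients in the
    shared SVD basis. (Assumption: a plain midpoint; the paper trains a
    neural network for this step instead.)"""
    U, S, Vt = layer_svd_patterns(layer_weights)
    # Column j of Vt holds layer j's coefficients: M[:, j] = U @ (S * Vt[:, j])
    coeff = 0.5 * (Vt[:, i] + Vt[:, i + 1])
    flat = U @ (S * coeff)
    return flat.reshape(layer_weights[i].shape)
```

Note that by linearity, the midpoint in coefficient space reduces to plain parameter averaging; the point of the learned predictor is precisely to do better than that while operating in the low-rank basis the SVD uncovers.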

Yifei Yang, Zouying Cao, Xinbei Ma, Yao Yao, Libo Qin, Zhi Chen, Hai Zhao • 2025

Related benchmarks

Task | Dataset | Metric | Result | Rank
Commonsense Reasoning | HellaSwag | Accuracy | 32.09 | 1460
Code Generation | HumanEval | Pass@1 | 25 | 850
Multi-task Language Understanding | MMLU | Accuracy | 36.43 | 842
Language Modeling | WikiText-103 (test) | Perplexity | 7.72 | 524
Boolean Question Answering | BoolQ | Accuracy | 66.33 | 307
Question Answering | ARC-E | Accuracy | 42.86 | 242
Question Answering | BoolQ | Accuracy | 70.46 | 240
Commonsense Reasoning | WinoGrande | Accuracy | 60.38 | 231
Question Answering | TriviaQA | Accuracy | 67.15 | 210
Question Answering | ARC-C | Accuracy | 32.54 | 166
(Showing 10 of 20 rows)

Other info

Code
