
Progressive Depth Up-scaling via Optimal Transport

About

Scaling Large Language Models (LLMs) yields performance gains but incurs substantial training costs. Depth up-scaling offers training efficiency by adding new layers to pre-trained models. However, most existing methods copy or average weights from base layers, neglecting neuron permutation differences between layers; this mismatch can cause misalignment that harms performance. Inspired by the use of Optimal Transport (OT) for neuron alignment, we propose Optimal Transport Depth Up-Scaling (OpT-DeUS). OpT-DeUS creates new layers by aligning and fusing the Transformer blocks of adjacent base layers via OT, mitigating neuron permutation mismatch between layers. OpT-DeUS achieves better overall performance and higher training efficiency than existing methods for continual pre-training and supervised fine-tuning across different model sizes. Our extensive analysis of interpolation positions further shows that inserting new layers closer to the top yields higher training efficiency, due to shorter back-propagation paths, while delivering additional performance gains.
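To make the core idea concrete, below is a minimal sketch of OT-style layer fusion. It uses a hard assignment (Hungarian matching, an optimal transport plan with uniform marginals restricted to permutations) over toy weight matrices; the actual OpT-DeUS method operates on full Transformer blocks and may use a different OT solver. The function names `fuse_adjacent_layers` and `insert_layer` are illustrative, not from the paper's codebase.

```python
# Sketch of OT-based depth up-scaling: align the neurons of two adjacent
# layers via an assignment (hard OT) plan, fuse the aligned weights to
# initialize a new layer, and insert it between the two base layers.
import numpy as np
from scipy.optimize import linear_sum_assignment


def fuse_adjacent_layers(w_a: np.ndarray, w_b: np.ndarray) -> np.ndarray:
    """Fuse two (out_dim, in_dim) weight matrices from adjacent layers.

    Rows are output neurons. Each neuron in w_b is matched to its closest
    neuron in w_a; w_b is then permuted accordingly before averaging, so
    the fusion averages aligned neurons rather than arbitrary row pairs.
    """
    # Pairwise squared-Euclidean cost between the two layers' output neurons.
    cost = ((w_a[:, None, :] - w_b[None, :, :]) ** 2).sum(-1)
    row_idx, col_idx = linear_sum_assignment(cost)  # optimal 1-to-1 matching
    w_b_aligned = w_b[col_idx]  # permute w_b's neurons into w_a's ordering
    return 0.5 * (w_a + w_b_aligned)  # fused weights for the new layer


def insert_layer(layers: list, position: int) -> list:
    """Fuse layers[position] and layers[position + 1] into a new layer and
    insert it between them (a larger `position` is closer to the top)."""
    new_layer = fuse_adjacent_layers(layers[position], layers[position + 1])
    return layers[: position + 1] + [new_layer] + layers[position + 1 :]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    base = [rng.standard_normal((8, 8)) for _ in range(4)]  # toy 4-layer stack
    upscaled = insert_layer(base, position=2)  # insert near the top
    print(len(base), "->", len(upscaled))  # 4 -> 5
```

In this sketch, inserting near the top (as the paper's analysis favors) also means gradients for the new layer traverse fewer layers during back-propagation, which is the intuition behind the reported training-efficiency gains.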

Mingzi Cao, Xi Wang, Nikolaos Aletras • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Multi-task Language Understanding | MMLU | Accuracy | 37.64 | 842 |
| Language Modeling | WikiText-103 (test) | Perplexity | 7.73 | 524 |
| Boolean Question Answering | BoolQ | Accuracy | 65.63 | 307 |
| Commonsense Reasoning | WinoGrande | Accuracy | 61.56 | 231 |
| Question Answering | ARC | Accuracy | 67.09 | 154 |
| Logical Reasoning | LogiQA | Accuracy | 23.81 | 84 |
| Physical Reasoning | PIQA | Accuracy | 75.52 | 44 |
| Commonsense Question Answering | CSQA | Accuracy | 49.55 | 44 |
| Zero-shot Question Answering and Reasoning | Evaluation Suite Zero-shot (ARC, LogiQA, Wino, CSQA, BoolQ, PIQA, MMLU) | ARC | 83.88 | 21 |
| Language Modeling | Wikipedia | Perplexity | 11.72 | 14 |
(Showing 10 of 12 benchmark results.)
