Progressive Depth Up-scaling via Optimal Transport
About
Scaling Large Language Models (LLMs) yields performance gains but incurs substantial training costs. Depth up-scaling improves training efficiency by adding new layers to a pre-trained model. However, most existing methods copy or average weights from base layers, neglecting neuron permutation differences between layers; the resulting misalignment can harm performance. Inspired by the use of Optimal Transport (OT) for neuron alignment, we propose Optimal Transport Depth Up-Scaling (OpT-DeUS). OpT-DeUS creates each new layer by aligning and fusing the Transformer blocks of adjacent base layers via OT, mitigating neuron permutation mismatch between layers. OpT-DeUS achieves better overall performance and higher training efficiency than existing methods for continual pre-training and supervised fine-tuning across different model sizes. To further evaluate the impact of interpolation positions, our extensive analysis shows that inserting new layers closer to the top yields higher training efficiency, due to shorter back-propagation paths, while providing additional performance gains.
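The core idea of aligning neurons in adjacent layers before fusing them can be illustrated with a minimal sketch. The snippet below uses a hard assignment (a permutation found via the Hungarian algorithm, `scipy.optimize.linear_sum_assignment`) as a simple special case of OT; the function name `align_and_fuse`, the cosine-similarity cost, and the equal-weight averaging are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment


def align_and_fuse(W_a: np.ndarray, W_b: np.ndarray) -> np.ndarray:
    """Align the output neurons (rows) of W_b to those of W_a,
    then average the two matrices to form a new layer's weights.

    Hard-assignment special case of OT; illustrative sketch only.
    """
    # Cost matrix: negative cosine similarity between rows of W_a and W_b.
    A = W_a / np.linalg.norm(W_a, axis=1, keepdims=True)
    B = W_b / np.linalg.norm(W_b, axis=1, keepdims=True)
    cost = -A @ B.T

    # Solve the assignment problem: match each neuron of W_a
    # to its most similar counterpart in W_b.
    row_ind, col_ind = linear_sum_assignment(cost)
    perm = np.empty_like(col_ind)
    perm[row_ind] = col_ind  # neuron i of W_a matches neuron perm[i] of W_b

    # Permute W_b's neurons into W_a's ordering, then fuse by averaging.
    W_b_aligned = W_b[perm]
    return 0.5 * (W_a + W_b_aligned)


# Sanity check: if W_b is just a row-permuted copy of W_a,
# alignment undoes the permutation and fusion recovers W_a.
rng = np.random.default_rng(0)
W_a = rng.standard_normal((8, 4))
W_b = W_a[rng.permutation(8)]
fused = align_and_fuse(W_a, W_b)
```

Averaging without the alignment step would mix unrelated neurons whenever the two layers store features in different orders, which is exactly the mismatch the method is designed to avoid.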
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Multi-task Language Understanding | MMLU | Accuracy | 37.64 | 842 |
| Language Modeling | WikiText-103 (test) | Perplexity | 7.73 | 524 |
| Boolean Question Answering | BoolQ | Accuracy | 65.63 | 307 |
| Commonsense Reasoning | WinoGrande | Accuracy | 61.56 | 231 |
| Question Answering | ARC | Accuracy | 67.09 | 154 |
| Logical Reasoning | LogiQA | Accuracy | 23.81 | 84 |
| Physical Reasoning | PIQA | Accuracy | 75.52 | 44 |
| Commonsense Question Answering | CSQA | Accuracy | 49.55 | 44 |
| Zero-shot Question Answering and Reasoning | Evaluation Suite Zero-shot (ARC, LogiQA, Wino, CSQA, BoolQ, PIQA, MMLU) | ARC Accuracy | 83.88 | 21 |
| Language Modeling | Wikipedia | Perplexity | 11.72 | 14 |