
Peri-LN: Revisiting Normalization Layer in the Transformer Architecture

About

Selecting a layer normalization (LN) strategy that stabilizes training and speeds convergence in Transformers remains difficult, even for today's large language models (LLMs). We present a comprehensive analytical foundation for understanding how different LN strategies influence training dynamics in large-scale Transformers. Pre-LN and Post-LN have long dominated standard practice despite their limitations in large-scale training. Recently, however, several open-source models have quietly adopted a third strategy without much explanation. This strategy places normalization layers peripherally around sublayers, a design we term Peri-LN. While Peri-LN has demonstrated promising performance, its precise mechanisms and benefits remain almost unexplored. Our in-depth analysis delineates the distinct behaviors of LN strategies, showing how each placement shapes activation variance and gradient propagation. To validate our theoretical insight, we conduct extensive experiments on Transformers up to $3.2$B parameters, showing that Peri-LN consistently achieves more balanced variance growth, steadier gradient flow, and greater convergence stability. Our results suggest that Peri-LN warrants broader consideration for large-scale Transformer architectures, providing renewed insights into the optimal placement of LN.
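The placement difference between the two dominant strategies and Peri-LN can be illustrated with a minimal NumPy sketch. This is an assumption-laden toy, not the paper's implementation: the learnable scale/offset parameters of LN are omitted, and the random linear map stands in for a real attention or MLP sublayer.

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    # Normalize over the last (feature) dimension; learnable gain/bias omitted.
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def pre_ln_block(x, sublayer):
    # Pre-LN: normalize only the sublayer input.
    return x + sublayer(layer_norm(x))

def peri_ln_block(x, sublayer):
    # Peri-LN: normalize the sublayer input AND its output
    # before adding the result back to the residual stream.
    return x + layer_norm(sublayer(layer_norm(x)))

# Toy sublayer: a fixed random linear map with deliberately large weights,
# standing in for an attention/MLP module that amplifies activations.
rng = np.random.default_rng(0)
W = rng.standard_normal((64, 64)) * 2.0
sublayer = lambda h: h @ W

x = rng.standard_normal((4, 64))
y_pre = pre_ln_block(x, sublayer)
y_peri = peri_ln_block(x, sublayer)

# With the output normalization in place, the Peri-LN block's activation
# variance stays bounded, while the Pre-LN residual stream absorbs the
# amplified sublayer output directly.
print("Pre-LN output variance: ", y_pre.var())
print("Peri-LN output variance:", y_peri.var())
```

Under these toy assumptions, the Peri-LN block keeps the residual stream's variance near the sum of two roughly unit-variance terms, which loosely mirrors the "balanced variance growth" behavior the abstract describes.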

Jeonghoon Kim, Byeongchan Lee, Cheonbok Park, Yeontaek Oh, Beomjun Kim, Taehwan Yoo, Seongjin Shin, Dongyoon Han, Jinwoo Shin, Kang Min Yoo • 2025

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Question Answering and Reasoning | Downstream Reasoning Suite (ARC-e, PIQA, HellaSwag, OpenBookQA, WinoGrande, MMLU, BoolQ) | ARC-e: 47.9 | 14 |
| Language Modeling | Pretraining Dataset | Train Loss (PT): 3.165 | 10 |
| Supervised Fine-tuning | SFT (train) | SFT Train Loss: 2.614 | 5 |
| Zero-shot Evaluation | Zero-shot Downstream Tasks (ARC-e, PIQA, HellaSwag, OpenBookQA, WinoGrande, MMLU, BoolQ), Llama-1B Benchmark Suite (test) | ARC-e Accuracy: 31.63 | 5 |
| Supervised Fine-tuning | SFT (evaluation) | SFT Evaluation Loss: 3.178 | 5 |
| Language Modeling and Zero-shot Reasoning | Standard LLM Evaluation Suite (ARC-e, PIQA, HellaSwag, OpenBookQA, WinoGrande, MMLU, BoolQ) | PT Eval Loss: 3.279 | 5 |
| Pre-training | Pre-training (evaluation) | Pre-training Eval Loss: 3.279 | 5 |
| Zero-shot Downstream Task Evaluation | Downstream Evaluation Suite (ARC-e, PIQA, HellaSwag, OpenBookQA, WinoGrande, MMLU, BoolQ) | ARC-e: 50.19 | 4 |
| Language Modeling | 20B-token pretraining corpus | PT Train Loss: 2.811 | 2 |
| Supervised Fine-tuning | Supervised Fine-Tuning (SFT) | SFT Training Loss: 2.477 | 2 |
