
Stronger Normalization-Free Transformers

About

Although normalization layers have long been viewed as indispensable components of deep learning architectures, the recent introduction of Dynamic Tanh (DyT) has demonstrated that alternatives are possible. The point-wise function DyT constrains extreme values for stable convergence and reaches normalization-level performance; this work searches for function designs that can surpass it. We first study how the intrinsic properties of point-wise functions influence training and performance. Building on these findings, we conduct a large-scale search for a more effective function design. Through this exploration, we introduce $\mathrm{Derf}(x) = \mathrm{erf}(\alpha x + s)$, where $\mathrm{erf}(x)$ is the rescaled Gaussian cumulative distribution function, and identify it as the most performant design. Derf outperforms LayerNorm, RMSNorm, and DyT across a wide range of domains, including vision (image recognition and generation), speech representation, and DNA sequence modeling. Our findings suggest that the performance gains of Derf stem largely from improved generalization rather than stronger fitting capacity. Its simplicity and stronger performance make Derf a practical choice for normalization-free Transformer architectures.
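The abstract's point-wise function can be sketched directly from its formula. Below is a minimal, illustrative Python version; treating `alpha` and `s` as plain scalar arguments is an assumption for clarity — in an actual normalization-free Transformer they would presumably be learnable parameters (e.g. per-channel, following DyT's design), which this page does not specify.

```python
import math

def derf(x: float, alpha: float = 1.0, s: float = 0.0) -> float:
    """Point-wise Derf(x) = erf(alpha * x + s), per the abstract.

    erf saturates smoothly within (-1, 1), so large inputs are
    squashed — constraining extreme activations much as tanh does
    in DyT, which is what stabilizes training without normalization.
    """
    return math.erf(alpha * x + s)
```

For example, `derf(0.0)` is exactly 0, while large-magnitude inputs approach the saturation limits ±1.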

Mingzhi Chen, Taiming Lu, Jiachen Zhu, Mingjie Sun, Zhuang Liu • 2025

Related benchmarks

Task                   Dataset                  Result                  Rank
Image Classification   ImageNet-1k 1.0 (test)   Top-1 Accuracy 83.8     197
Image Generation       ImageNet-1k (val)        FID 18.92               84
Language Modeling      OpenWebText (val)        Validation Loss 2.94    70
Image Classification   ImageNet-1k (val)        Top-1 Accuracy 83.8     34
DNA Classification     GenomicBenchmarks        Accuracy 87.3           14
Speech Pretraining     LibriSpeech (val)        Validation Loss 1.9     14

Other info

GitHub
