
Geometric Interpretation of Layer Normalization and a Comparative Analysis with RMSNorm

About

This paper presents a novel geometric interpretation of LayerNorm and explores how LayerNorm influences the norm and orientation of hidden vectors in the representation space. With these geometric insights, we lay the foundation for comparing LayerNorm with RMSNorm. We show that the definition of LayerNorm is innately linked to the uniform vector, defined as $\boldsymbol{1} = [1, 1, 1, \cdots, 1]^T \in \mathbb{R}^d$. We then show that the standardization step in LayerNorm can be understood as three simple steps: (i) remove the component of a vector along the uniform vector, (ii) normalize the remaining vector, and (iii) scale the resultant vector by $\sqrt{d}$, where $d$ is the dimensionality of the representation space. We also provide additional insights into how LayerNorm operates at inference time. Finally, we compare the hidden representations of LayerNorm-based LLMs with models trained using RMSNorm and show that all LLMs naturally operate orthogonal to the uniform vector at inference time, that is, on average they have no component along the uniform vector during inference. This presents the first mechanistic evidence that removing the component along the uniform vector in LayerNorm is a redundant step. These results advocate for using RMSNorm over LayerNorm, which is also more computationally efficient.
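The three-step decomposition in the abstract can be checked numerically. The sketch below (a minimal illustration, not code from the paper; no learned gain/bias parameters are included) verifies that standard LayerNorm equals "project out the uniform vector, normalize, scale by $\sqrt{d}$", and that RMSNorm is the same pipeline with the projection step skipped:

```python
import numpy as np

d = 8
rng = np.random.default_rng(0)
x = rng.normal(size=d)

# Standard LayerNorm standardization (no affine gain/bias):
ln = (x - x.mean()) / x.std()

# Geometric three-step view:
ones = np.ones(d)
x_perp = x - (x @ ones / d) * ones        # (i) remove component along the uniform vector
x_unit = x_perp / np.linalg.norm(x_perp)  # (ii) normalize the remainder
geo = np.sqrt(d) * x_unit                 # (iii) scale by sqrt(d)

print(np.allclose(ln, geo))  # True: the two formulations agree

# RMSNorm skips step (i): x / rms(x) == sqrt(d) * x / ||x||
rms = x / np.sqrt((x ** 2).mean())
print(np.allclose(rms, np.sqrt(d) * x / np.linalg.norm(x)))  # True
```

Note that after LayerNorm the output always has norm exactly $\sqrt{d}$ and is orthogonal to $\boldsymbol{1}$; RMSNorm outputs also have norm $\sqrt{d}$ but keep whatever component along $\boldsymbol{1}$ the input had.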

Akshat Gupta, Atahan Ozdemir, Gopala Anumanchipalli • 2024

Related benchmarks

Task                             Dataset                    Result             Rank
Object Hallucination             POPE Adversarial           Accuracy 87.6      288
Object Hallucination Evaluation  POPE Adversarial           Accuracy 0.865     55
Image Captioning                 POPE Adversarial           CIDEr 118.5        50
Object Hallucination             COCO Captions 2014 (val)   CHAIR (scene) 9.5  35
