
Beyond Muon: MUD (MomentUm Decorrelation) for Faster Transformer Training

About

Orthogonalized-momentum optimizers such as Muon improve transformer training by approximately whitening/orthogonalizing matrix-valued momentum updates via a short polar-decomposition iteration. However, polar-factor approximations typically require multiple large matrix multiplications, and the resulting overhead can be substantial and hardware-dependent. We introduce MUD (MomentUm Decorrelation), a complementary whitening approach that replaces Muon's polar update with a triangular (Cholesky-like) whitening surrogate inspired by classical Gram-Schmidt and Gauss-Seidel ideas. We show that row-orthonormal matrices are fixed points of the MUD map, relate the inner step to symmetric Gauss-Seidel preconditioning of the Gram matrix, and prove quadratic local convergence near the fixed point. In terms of time-to-perplexity, MUD yields consistent 10-50% wall-clock improvements over tuned AdamW and Muon, typically converging slightly slower per step than Muon but with substantially lower optimizer overhead: relative to Muon, MUD improves peak tokens/s by roughly 1.3-2.6× across most settings and by up to nearly 3× on GPT-2 Large on an A100. We also demonstrate training an ESM-2 150M protein language model, where MUD matches Muon-level validation perplexity in significantly less wall-clock time.
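The triangular whitening surrogate described in the abstract can be sketched in a few lines of NumPy. This is an illustrative reconstruction under stated assumptions, not the authors' implementation: the function name `mud_whiten` and the `eps` regularizer are ours. The idea is to decorrelate the rows of a momentum matrix M by solving against the Cholesky factor L of its Gram matrix M Mᵀ, so that W = L⁻¹ M satisfies W Wᵀ = I, which also makes row-orthonormal matrices fixed points of the map, as the abstract states.

```python
import numpy as np

def mud_whiten(m, eps=0.0):
    """Cholesky-based row whitening of a momentum matrix (illustrative sketch).

    Solves L @ w = m, where L is the lower-triangular Cholesky factor of the
    Gram matrix m @ m.T. Then w @ w.T = L^{-1} (m m^T) L^{-T} = I, so any
    matrix with orthonormal rows is (up to scaling by eps) a fixed point.
    """
    gram = m @ m.T                                            # Gram matrix of the rows
    L = np.linalg.cholesky(gram + eps * np.eye(m.shape[0]))   # triangular factor
    return np.linalg.solve(L, m)                              # whitened update L^{-1} m
```

Note that this is a single triangular solve rather than Muon's iterated polar-factor matrix multiplications; the paper's actual inner step (related to symmetric Gauss-Seidel preconditioning of the Gram matrix) and its convergence analysis are more refined than this sketch.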

Ben S. Southworth, Stephen Thomas • 2026

Related benchmarks

Task | Dataset | Metric | Result | Rank
Language Modeling | NanoGPT OpenWebText | Throughput (tokens/s) | 2.95e+5 | 24
Language Modeling | WikiText-103 | Throughput (tokens/s) | 1.16e+5 | 21
Language Modeling | FineWeb-Edu | Throughput (tokens/s) | 5.22e+4 | 15
Image Classification | CIFAR-10 AIRBench (val) | Validation Accuracy | 94.07 | 8
Masked Language Modeling | omg prot50 (val) | Wall-clock Time (17.5 Target PPL) | 78 | 4
