Beyond Muon: MUD (MomentUm Decorrelation) for Faster Transformer Training
About
Orthogonalized-momentum optimizers such as Muon improve transformer training by approximately whitening/orthogonalizing matrix-valued momentum updates via a short polar-decomposition iteration. However, polar-factor approximations typically require multiple large matrix multiplications, and the resulting overhead can be substantial and hardware-dependent. We introduce MUD (MomentUm Decorrelation), a complementary whitening approach that replaces Muon's polar update with a triangular (Cholesky-like) whitening surrogate inspired by classical Gram--Schmidt and Gauss-Seidel ideas. We show that row-orthonormal matrices are fixed points of the MUD map, relate the inner step to symmetric Gauss-Seidel preconditioning of the Gram matrix, and prove quadratic local convergence near the fixed point. In terms of time-to-perplexity, MUD yields consistent 10-50% wall-clock improvements over tuned AdamW and Muon, typically converging slightly slower per step than Muon but with substantially lower optimizer overhead: relative to Muon, MUD improves peak tokens/s by roughly $1.3-2.6\times$ across most settings and up to nearly $3\times$ on GPT-2 large on an A100. We also demonstrate training an ESM-2 150M protein language model, where MUD matches Muon-level validation perplexity in significantly less wall-clock time.
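To make the idea concrete, here is a minimal NumPy sketch of triangular (Cholesky-based) row whitening, the kind of surrogate the abstract describes. This is an illustrative assumption about the core linear-algebra step, not the paper's exact MUD algorithm: we whiten the momentum matrix's rows with the Cholesky factor of its Gram matrix, so a single factorization plus one triangular solve stands in for the repeated large matmuls of a polar (Newton-Schulz) iteration. The function name `mud_whiten` and the `eps` regularizer are hypothetical.

```python
import numpy as np

def mud_whiten(momentum: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Cholesky-style row whitening (illustrative sketch, not the exact MUD step).

    Given a wide momentum matrix M (rows <= cols), factor the Gram matrix
    G = M M^T + eps*I = L L^T and return W = L^{-1} M. Then
    W W^T = L^{-1} (G - eps*I) L^{-T} ~ I, i.e. the rows of W are
    (approximately) orthonormal. Note that row-orthonormal inputs are
    fixed points: if M M^T = I then L ~ I and W ~ M.
    """
    m, n = momentum.shape
    assert m <= n, "expects a wide matrix; transpose tall updates first"
    gram = momentum @ momentum.T + eps * np.eye(m)
    L = np.linalg.cholesky(gram)          # lower-triangular factor of the Gram matrix
    # Solve L W = M for W rather than forming L^{-1} explicitly.
    return np.linalg.solve(L, momentum)
```

The fixed-point property mentioned in the abstract is easy to check numerically: whitening an already row-orthonormal matrix returns (up to the `eps` regularization) the same matrix.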
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Language Modeling | NanoGPT OpenWebText | Throughput (tokens/s) | 2.95e+5 | 24 |
| Language Modeling | WikiText-103 | Throughput (tokens/s) | 1.16e+5 | 21 |
| Language Modeling | FineWeb-Edu | Throughput (tokens/s) | 5.22e+4 | 15 |
| Image Classification | CIFAR-10 AIRBench (val) | Validation Accuracy | 94.07 | 8 |
| Masked Language Modeling | omg prot50 (val) | Wall-clock Time (17.5 Target PPL) | 78 | 4 |