
On Surprising Effectiveness of Masking Updates in Adaptive Optimizers

About

Training large language models (LLMs) relies almost exclusively on dense adaptive optimizers with increasingly sophisticated preconditioners. We challenge this by showing that randomly masking parameter updates can be highly effective, with a masked variant of RMSProp consistently outperforming recent state-of-the-art optimizers. Our analysis reveals that the random masking induces a curvature-dependent geometric regularization that smooths the optimization trajectory. Motivated by this finding, we introduce Momentum-aligned gradient masking (Magma), which modulates the masked updates using momentum-gradient alignment. Extensive LLM pre-training experiments show that Magma is a simple drop-in replacement for adaptive optimizers with consistent gains and negligible computational overhead. Notably, for the 1B model size, Magma reduces perplexity by over 19% and 9% compared to Adam and Muon, respectively.
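To make the idea concrete, here is a minimal, hypothetical sketch of the two update rules described above: an RMSProp step whose per-coordinate updates are randomly masked, and a Magma-style step whose mask is modulated by momentum-gradient alignment. The abstract does not give exact formulas, so the Bernoulli mask, the masking probability, the sign-agreement alignment rule, and the function names `masked_rmsprop_step` and `magma_step` are all assumptions made for illustration, not the paper's method.

```python
import numpy as np


def masked_rmsprop_step(param, grad, state, lr=1e-3, beta2=0.999,
                        eps=1e-8, mask_prob=0.5, rng=None):
    """Illustrative masked-RMSProp step (assumed Bernoulli masking)."""
    rng = rng or np.random.default_rng()
    # Standard RMSProp second-moment accumulator.
    state["v"] = beta2 * state["v"] + (1.0 - beta2) * grad ** 2
    update = grad / (np.sqrt(state["v"]) + eps)
    # Randomly keep a fraction of per-coordinate updates (assumption).
    mask = rng.random(param.shape) < mask_prob
    return param - lr * mask * update


def magma_step(param, grad, state, lr=1e-3, beta1=0.9, beta2=0.999,
               eps=1e-8, mask_prob=0.5, rng=None):
    """Illustrative Magma-style step: mask modulated by momentum-gradient
    alignment. The sign-agreement rule below is an assumption, not the
    paper's exact formula."""
    rng = rng or np.random.default_rng()
    state["m"] = beta1 * state["m"] + (1.0 - beta1) * grad
    state["v"] = beta2 * state["v"] + (1.0 - beta2) * grad ** 2
    update = state["m"] / (np.sqrt(state["v"]) + eps)
    # Keep coordinates where momentum and gradient agree in sign;
    # elsewhere fall back to random masking (assumed modulation scheme).
    aligned = np.sign(state["m"]) == np.sign(grad)
    random_mask = rng.random(param.shape) < mask_prob
    mask = np.where(aligned, 1.0, random_mask)
    return param - lr * mask * update


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    param = rng.normal(size=4)
    state = {"m": np.zeros(4), "v": np.zeros(4)}
    grad = rng.normal(size=4)
    print(magma_step(param, grad, state, rng=rng))
```

Since the mask only gates which coordinates receive an update, a step like this adds essentially no computational overhead on top of the underlying adaptive optimizer, consistent with the drop-in-replacement claim above.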

Taejong Joo, Wenhan Xia, Cheolmin Kim, Ming Zhang, Eugene Ie • 2026

Related benchmarks

Task: Language Model Pre-training
Dataset: C4 Llama 2 pre-training (val)
Result: Perplexity 13.19
Rank: 47
