
On the Crucial Role of Initialization for Matrix Factorization

About

This work revisits the classical low-rank matrix factorization problem and unveils the critical role of initialization in shaping convergence rates for this nonconvex and nonsmooth optimization problem. We introduce Nystrom initialization, which significantly improves the global convergence of Scaled Gradient Descent (ScaledGD) in both symmetric and asymmetric matrix factorization tasks. Specifically, we prove that ScaledGD with Nystrom initialization achieves quadratic convergence in cases where only linear rates were previously known. Furthermore, we extend this initialization to low-rank adapters (LoRA) commonly used for fine-tuning foundation models. Our approach, NoRA, i.e., LoRA with Nystrom initialization, demonstrates superior performance across various downstream tasks and model scales, from 1B to 7B parameters, in large language and diffusion models.
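The abstract pairs two ingredients: a sketch-based (Nystrom-style) initialization and ScaledGD, whose update preconditions the gradient by the inverse Gram matrix of the current factor. The sketch below is a minimal illustration for the symmetric case, minimizing f(X) = (1/4)||XX^T - A||_F^2; the function names, the step size, and the specific form of the initializer (projecting A onto a random Gaussian sketch) are our assumptions for illustration, not the paper's exact algorithm.

```python
import numpy as np

def nystrom_init(A, r, rng):
    # Assumed Nystrom-style initializer: sketch the target matrix
    # with a random Gaussian test matrix, X0 = A @ G (n x r).
    G = rng.standard_normal((A.shape[1], r))
    return A @ G

def scaledgd_symmetric(A, r, eta=0.5, iters=100, seed=0):
    # ScaledGD for f(X) = 1/4 * ||X X^T - A||_F^2:
    #   grad f(X) = (X X^T - A) X
    #   X <- X - eta * grad f(X) @ inv(X^T X)   (preconditioned step)
    rng = np.random.default_rng(seed)
    X = nystrom_init(A, r, rng)
    for _ in range(iters):
        grad = (X @ X.T - A) @ X
        X = X - eta * grad @ np.linalg.inv(X.T @ X)
    return X

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    U = rng.standard_normal((50, 3))
    A = U @ U.T                       # rank-3 PSD target
    X = scaledgd_symmetric(A, r=3)
    print(np.linalg.norm(X @ X.T - A))  # residual shrinks rapidly
```

Note that in the scalar case (r = 1, n = 1) this update with eta = 0.5 reduces to x <- (x^2 + a)/(2x), i.e., Newton's iteration for the square root, which is one way to see where a quadratic rate can come from.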

Bingcong Li, Liang Zhang, Aryan Mokhtari, Niao He • 2024

Related benchmarks

Task                     Dataset   Result             Rank
Common Sense Reasoning   BoolQ     Accuracy: 71.16    131
