
GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection

About

Training Large Language Models (LLMs) presents significant memory challenges, predominantly due to the growing size of weights and optimizer states. Common memory-reduction approaches, such as low-rank adaptation (LoRA), add a trainable low-rank matrix to the frozen pre-trained weight in each layer, reducing trainable parameters and optimizer states. However, such approaches typically underperform training with full-rank weights in both the pre-training and fine-tuning stages, since they limit the parameter search to a low-rank subspace and alter the training dynamics; further, they may require a full-rank warm start. In this work, we propose Gradient Low-Rank Projection (GaLore), a training strategy that allows full-parameter learning but is more memory-efficient than common low-rank adaptation methods such as LoRA. Our approach reduces memory usage by up to 65.5% in optimizer states while maintaining both efficiency and performance for pre-training LLaMA 1B and 7B architectures on the C4 dataset with up to 19.7B tokens, and for fine-tuning RoBERTa on GLUE tasks. Our 8-bit GaLore further reduces optimizer memory by up to 82.5% and total training memory by 63.3%, compared to a BF16 baseline. Notably, we demonstrate, for the first time, the feasibility of pre-training a 7B model on consumer GPUs with 24GB memory (e.g., NVIDIA RTX 4090) without model parallelism, checkpointing, or offloading strategies.
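The core idea can be sketched in a few lines: instead of storing optimizer state for the full-rank gradient, project the gradient into a low-rank subspace, keep the optimizer state there, and project the update back before applying it to the weights. The sketch below is a simplified, hypothetical illustration using NumPy with plain momentum; it is not the paper's implementation (which, among other details, reuses the projector across many steps and pairs with Adam), and the function and variable names are invented for illustration.

```python
import numpy as np

def galore_momentum_step(W, G, r, lr=0.01, beta=0.9, state=None):
    """Illustrative GaLore-style update for one weight matrix.

    W     : (m, n) weight matrix, updated in place-style (returned).
    G     : (m, n) full-rank gradient of the loss w.r.t. W.
    r     : target rank of the projection subspace.
    state : (r, n) momentum buffer kept in the low-rank space;
            this smaller buffer is where the memory saving comes from.
    """
    # Projector from the top-r left singular vectors of the gradient.
    U, _, _ = np.linalg.svd(G, full_matrices=False)
    P = U[:, :r]                      # (m, r)

    G_low = P.T @ G                   # rank-r gradient, shape (r, n)

    if state is None:
        state = np.zeros_like(G_low)
    state = beta * state + G_low      # optimizer update in low-rank space

    W = W - lr * (P @ state)          # project update back to full rank
    return W, state
```

Note that the optimizer state has shape (r, n) rather than (m, n), so for r much smaller than m the per-layer optimizer memory shrinks accordingly, while the weights themselves remain full-rank.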

Jiawei Zhao, Zhenyu Zhang, Beidi Chen, Zhangyang Wang, Anima Anandkumar, Yuandong Tian • 2024

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Natural Language Understanding | GLUE (dev) | SST-2 (Acc): 95.6 | 504 |
| Natural Language Understanding | GLUE (test) | SST-2 Accuracy: 94 | 416 |
| Language Modeling | C4 (val) | PPL: 15.64 | 392 |
| Mathematical Reasoning | GSM8K | Accuracy: 74.2 | 351 |
| Multi-turn Dialogue Evaluation | MT-Bench | Overall Score: 5.83 | 331 |
| Commonsense Reasoning | Common Sense Reasoning Tasks | Avg Score: 76.3 | 241 |
| Multitask Language Understanding | MMLU | Accuracy: 66.3 | 206 |
| Natural Language Understanding | GLUE (val) | SST-2: 94.04 | 170 |
| Language Modeling | FineWeb (val) | Validation Loss: 2.118 | 156 |
| Language Understanding | MMLU (test) | -- | 136 |

Showing 10 of 32 rows.
