
COAP: Memory-Efficient Training with Correlation-Aware Gradient Projection

About

Training large-scale neural networks in vision, language, and multimodal domains demands substantial memory, primarily due to the storage of optimizer states. While LoRA, a popular parameter-efficient method, reduces memory usage, it often suffers from suboptimal performance due to the constraints of low-rank updates. Low-rank gradient projection methods (e.g., GaLore, Flora) reduce optimizer memory by projecting gradients and moment estimates into low-rank spaces via singular value decomposition or random projection. However, they fail to account for inter-projection correlation, causing performance degradation, and their projection strategies often incur high computational costs. In this paper, we present COAP (Correlation-Aware Gradient Projection), a memory-efficient method that minimizes computational overhead while maintaining training performance. Evaluated across various vision, language, and multimodal tasks, COAP outperforms existing methods in both training speed and model performance. For LLaMA-1B, it reduces optimizer memory by 61% with only a 2% additional time cost, achieving the same PPL as AdamW. With 8-bit quantization, COAP cuts optimizer memory by 81% and achieves a 4x speedup over GaLore for LLaVA-v1.5-7B fine-tuning, while delivering higher accuracy.

Jinqi Xiao, Shen Sang, Tiancheng Zhi, Jing Liu, Qing Yan, Yuqian Zhang, Linjie Luo, Bo Yuan • 2024

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Image Generation | ImageNet-1k (val) | FID: 2.1 | 84 |
| Multimodal Question Answering | ScienceQA | -- | 35 |
| Human pose-conditioned image generation | Single-person images derived from Stable Diffusion XL (val) | Optimizer Memory (GB): 0.5 | 15 |
| Language Modeling | C4 (train) | PPL: 15.28 | 8 |

Other info

Code
