
Training Neural Networks from Scratch with Parallel Low-Rank Adapters

About

The scalability of deep learning models is fundamentally limited by computing resources, memory, and communication. Although methods like low-rank adaptation (LoRA) have reduced the cost of model finetuning, their application to model pre-training remains largely unexplored. This paper explores extending LoRA to model pre-training, identifying the inherent constraints and limitations of standard LoRA in this context. We introduce LoRA-the-Explorer (LTE), a novel bi-level optimization algorithm designed to enable parallel training of multiple low-rank heads across computing nodes, thereby reducing the need for frequent synchronization. Our approach includes extensive experimentation on vision transformers using various vision datasets, demonstrating that LTE is competitive with standard pre-training.
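For intuition, below is a minimal single-process sketch of the idea as we read the abstract: several low-rank heads (each a trainable B·A pair over a frozen base weight) are optimized independently, and their updates are only occasionally averaged and merged into the base. All names (`LoRALinear`, `sync_heads`) and hyperparameters here are illustrative assumptions, not the authors' implementation; in the paper the heads run on separate computing nodes rather than in one process.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen base linear layer plus a trainable low-rank update B @ A."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # only the low-rank head is trained
        out_features, in_features = base.weight.shape
        self.A = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_features, rank))  # zero init: no update at start
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # y = W0 x + (alpha / r) * B A x
        return self.base(x) + self.scale * (x @ self.A.T) @ self.B.T


@torch.no_grad()
def sync_heads(base: nn.Linear, heads: list) -> None:
    """Average the low-rank updates across heads, fold the mean into the
    shared base weight, and reset each head for the next round."""
    delta = torch.stack([h.scale * h.B @ h.A for h in heads]).mean(dim=0)
    base.weight += delta
    for h in heads:
        h.A.normal_(std=0.01)  # restart exploration from the new base point
        h.B.zero_()


# Usage sketch: four heads share one frozen base and train independently
# on their own data shards between synchronization points.
base = nn.Linear(512, 512)
heads = [LoRALinear(base, rank=8) for _ in range(4)]
# ... each head trains locally for many steps ...
sync_heads(base, heads)  # infrequent synchronization, per the abstract
```

Because synchronization only exchanges the small A and B factors (or their merged mean), communication cost scales with the rank rather than with the full weight matrix, which is what allows the infrequent synchronization the abstract describes.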

Minyoung Huh, Brian Cheung, Jeremy Bernstein, Phillip Isola, Pulkit Agrawal • 2024

Related benchmarks

Task                             Dataset        Result   Rank
Natural Language Understanding   GLUE (SST-2)   96.2     452
