
NOLA: Compressing LoRA using Linear Combination of Random Basis

About

Fine-tuning Large Language Models (LLMs) and storing a copy for each downstream task or domain is impractical because of the massive model size (e.g., 350GB for GPT-3). Current literature, such as LoRA, showcases the potential of low-rank modifications to the original weights of an LLM, enabling efficient adaptation and storage of task-specific models. These methods can reduce the number of parameters needed to fine-tune an LLM by several orders of magnitude. Yet, they face two primary limitations: (1) the parameter count is lower-bounded by the rank-one decomposition, and (2) the extent of reduction is heavily influenced by both the model architecture and the chosen rank. We introduce NOLA, which overcomes the rank-one lower bound present in LoRA. It achieves this by re-parameterizing the low-rank matrices in LoRA as linear combinations of randomly generated matrices (basis) and optimizing only the linear mixture coefficients. This approach allows us to decouple the number of trainable parameters from both the choice of rank and the network architecture. We present adaptation results using GPT-2, LLaMA-2, and ViT on natural language and computer vision tasks. NOLA performs as well as LoRA models with far fewer parameters than rank-one LoRA, the best compression LoRA can achieve. In particular, on LLaMA-2 70B, our method is almost 20 times more compact than the most compressed LoRA without degradation in accuracy. Our code is available here: https://github.com/UCDvision/NOLA
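The core idea in the abstract can be sketched in a few lines. This is a minimal, hypothetical illustration (class and parameter names are mine, not from the paper's code): each LoRA factor is replaced by a fixed bank of random basis matrices, regenerable from a seed, and only the small vectors of mixture coefficients are trained and stored per task.

```python
import numpy as np

class NOLALinearSketch:
    """Illustrative sketch of the NOLA reparameterization.

    LoRA trains a low-rank update W + A @ B with A (d_out x r) and
    B (r x d_in). NOLA instead fixes k (resp. l) random basis matrices
    for each factor:
        A = sum_i alpha[i] * A_i,   B = sum_j beta[j] * B_j,
    and trains only the k + l mixture coefficients, decoupling the
    trainable parameter count from the rank r and the layer shape.
    """

    def __init__(self, d_out, d_in, rank=4, k=8, l=8, seed=0):
        rng = np.random.default_rng(seed)  # bases regenerable from the seed
        self.A_basis = rng.standard_normal((k, d_out, rank))  # fixed
        self.B_basis = rng.standard_normal((l, rank, d_in))   # fixed
        # Only these coefficients are optimized and stored per task.
        self.alpha = np.zeros(k)
        self.beta = np.zeros(l)

    def delta_w(self):
        # Mix the fixed random bases with the learned coefficients.
        A = np.tensordot(self.alpha, self.A_basis, axes=1)  # (d_out, rank)
        B = np.tensordot(self.beta, self.B_basis, axes=1)   # (rank, d_in)
        return A @ B                                        # (d_out, d_in)

layer = NOLALinearSketch(d_out=16, d_in=32, rank=4, k=8, l=8)
print(layer.delta_w().shape)                # (16, 32)
print(layer.alpha.size + layer.beta.size)   # 16 trainable parameters
```

Note how the trainable count (k + l = 16 here) stays fixed even if `d_out`, `d_in`, or `rank` grow, which is the decoupling the abstract describes.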

Soroush Abbasi Koohpayegani, KL Navaneet, Parsa Nooralinejad, Soheil Kolouri, Hamed Pirsiavash• 2023

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Image Classification | ImageNet-1K | Top-1 Acc | 77.4 | 1239 |
| Image Classification | SUN397 | -- | -- | 441 |
| Image Classification | CUB-200 | -- | -- | 106 |
| Image Classification | Oxford Pets | Top-1 Acc | 90.4 | 94 |
| Language Understanding | MMLU | MMLU Accuracy | 56.1 | 77 |
| Multi-task Language Understanding | MMLU | MMLU Accuracy | 51.8 | 59 |
| Image Classification | CIFAR-10 (full) | Top-1 Acc | 97.4 | 23 |
| Natural Language Generation | E2E | METEOR | 0.468 | 17 |
| Image Classification | Food-101 (10 samples/class) | Top-1 Acc | 82.5 | 12 |
| Image Classification | Tiny-ImageNet (10 samples/class) | Top-1 Acc | 84.3 | 12 |

Showing 10 of 16 rows.
