
LoRA: Low-Rank Adaptation of Large Language Models

About

An important paradigm of natural language processing consists of large-scale pre-training on general domain data and adaptation to particular tasks or domains. As we pre-train larger models, full fine-tuning, which retrains all model parameters, becomes less feasible. Using GPT-3 175B as an example -- deploying independent instances of fine-tuned models, each with 175B parameters, is prohibitively expensive. We propose Low-Rank Adaptation, or LoRA, which freezes the pre-trained model weights and injects trainable rank decomposition matrices into each layer of the Transformer architecture, greatly reducing the number of trainable parameters for downstream tasks. Compared to GPT-3 175B fine-tuned with Adam, LoRA can reduce the number of trainable parameters by 10,000 times and the GPU memory requirement by 3 times. LoRA performs on-par or better than fine-tuning in model quality on RoBERTa, DeBERTa, GPT-2, and GPT-3, despite having fewer trainable parameters, a higher training throughput, and, unlike adapters, no additional inference latency. We also provide an empirical investigation into rank-deficiency in language model adaptation, which sheds light on the efficacy of LoRA. We release a package that facilitates the integration of LoRA with PyTorch models and provide our implementations and model checkpoints for RoBERTa, DeBERTa, and GPT-2 at https://github.com/microsoft/LoRA.

Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen • 2021
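To make the abstract's core idea concrete, below is a minimal sketch of a LoRA-adapted linear layer in PyTorch. It is not the authors' released package; the class name LoRALinear and the hyperparameters r and alpha are illustrative. It shows the three ingredients the abstract describes: the pre-trained weight W is frozen, a trainable rank-r decomposition B @ A is added alongside it, and the update can be merged into W at deployment so inference pays no extra latency.

import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Minimal LoRA sketch: y = x W^T + (alpha/r) * x A^T B^T.

    The pre-trained weight W is frozen; only the rank-r factors A and B train.
    """

    def __init__(self, in_features: int, out_features: int, r: int = 8, alpha: int = 16):
        super().__init__()
        # Frozen pre-trained weight (real use loads the base model's values).
        self.weight = nn.Parameter(torch.empty(out_features, in_features), requires_grad=False)
        nn.init.normal_(self.weight)  # placeholder init for this sketch only
        # Low-rank update Delta W = B @ A. B is zero-initialized so the
        # adapted model starts out exactly equal to the pre-trained model.
        self.lora_A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        base = x @ self.weight.T
        # Two skinny matmuls; B @ A is never materialized during training.
        update = (x @ self.lora_A.T) @ self.lora_B.T
        return base + self.scaling * update

    @torch.no_grad()
    def merge(self) -> None:
        # Fold the update into W for deployment: no additional inference latency.
        self.weight += self.scaling * (self.lora_B @ self.lora_A)

The parameter savings follow directly: a 4096 x 4096 weight has about 16.8M parameters, while its rank-8 factors contribute only 2 * 8 * 4096 = 65,536 trainable parameters, roughly 256x fewer for that layer.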

Related benchmarks

Task                              Dataset            Metric       Result   Rank
Image Classification              CIFAR-100 (test)   Accuracy     79.47    3518
Semantic Segmentation             ADE20K (val)       mIoU         51.1     2888
Language Modeling                 WikiText2          Perplexity   3.82     2839
Object Detection                  COCO 2017 (val)    --           --       2643
Commonsense Reasoning             HellaSwag          Accuracy     94.85    1891
Language Modeling                 WikiText-2         --           --       1624
Visual Question Answering         VizWiz             Accuracy     56.33    1525
Object Hallucination Evaluation   POPE               Accuracy     88.1     1455
Mathematical Reasoning            GSM8K              Accuracy     74.9     1362
Visual Question Answering         TextVQA            Accuracy     81.04    1285

Showing 10 of 1,513 rows.

Other info

Code

https://github.com/microsoft/LoRA
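The released loralib package wraps the pattern above behind drop-in replacements for standard PyTorch modules. The snippet below is a sketch of typical usage based on the repository's README; exact signatures and defaults may differ across versions.

import torch
import torch.nn as nn
import loralib as lora

# Swap a dense layer for its LoRA counterpart; r is the adaptation rank.
model = nn.Sequential(lora.Linear(768, 768, r=16))

# Freeze everything except the LoRA factors before training.
lora.mark_only_lora_as_trainable(model)

# Checkpoint only the (small) LoRA parameters, not the full model.
torch.save(lora.lora_state_dict(model), "lora_checkpoint.pt")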