QA-LoRA: Quantization-Aware Low-Rank Adaptation of Large Language Models
About
Recent years have witnessed the rapid development of large language models (LLMs). Despite their strong ability in many language-understanding tasks, the heavy computational burden largely restricts the application of LLMs, especially when one needs to deploy them onto edge devices. In this paper, we propose a quantization-aware low-rank adaptation (QA-LoRA) algorithm. The motivation lies in the imbalanced degrees of freedom of quantization and adaptation, and the solution is to use group-wise operators which increase the degrees of freedom of quantization while decreasing those of adaptation. QA-LoRA is easily implemented with a few lines of code, and it equips the original LoRA with two-fold abilities: (i) during fine-tuning, the LLM's weights are quantized (e.g., into INT4) to reduce time and memory usage; (ii) after fine-tuning, the LLM and auxiliary weights are naturally integrated into a quantized model without loss of accuracy. We apply QA-LoRA to the LLaMA and LLaMA2 model families and validate its effectiveness on different fine-tuning datasets and downstream scenarios. Code will be made available at https://github.com/yuhuixu1993/qa-lora.
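To make the group-wise idea concrete, the sketch below implements a simple asymmetric per-group quantizer in NumPy: each group of input channels gets its own scale and zero point, which is what gives quantization extra degrees of freedom. This is an illustrative simplification, not the authors' implementation; the function names, the group size, and the round-trip check are assumptions for this sketch (the exact operators are in the linked repository).

```python
import numpy as np

def quantize_groupwise(W, bits=4, group_size=32):
    """Asymmetric per-group quantization along the input dimension.

    Each contiguous group of `group_size` input channels gets its own
    scale and zero offset, so a (out_dim x in_dim) weight matrix has
    in_dim / group_size pairs of quantization parameters per row.
    This is a hypothetical sketch of group-wise quantization, not the
    QA-LoRA reference code.
    """
    out_dim, in_dim = W.shape
    assert in_dim % group_size == 0
    n_groups = in_dim // group_size
    Wg = W.reshape(out_dim, n_groups, group_size)
    w_min = Wg.min(axis=2, keepdims=True)
    w_max = Wg.max(axis=2, keepdims=True)
    scale = (w_max - w_min) / (2**bits - 1)
    scale = np.where(scale == 0, 1.0, scale)   # guard constant groups
    q = np.clip(np.round((Wg - w_min) / scale), 0, 2**bits - 1)
    return q, scale, w_min                      # zero offset kept as w_min

def dequantize_groupwise(q, scale, w_min):
    """Reconstruct the float weights from INT codes, scales, and offsets."""
    return (q * scale + w_min).reshape(q.shape[0], -1)

# Round-trip check: quantize a random weight matrix to 4 bits with
# group size 32, then measure the worst-case reconstruction error.
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 64)).astype(np.float32)
q, scale, w_min = quantize_groupwise(W, bits=4, group_size=32)
W_hat = dequantize_groupwise(q, scale, w_min)
max_err = np.abs(W - W_hat).max()
```

Because QA-LoRA pools the adapter input per group, the fine-tuned low-rank term can be folded into these per-group zero offsets after training, which is why the merged model remains a genuine INT4 model rather than requiring a float correction term.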
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Language Understanding | MMLU 5-shot (test) | Accuracy | 49.2 | 149 |
| Language Understanding | MMLU 5-shot | Accuracy | 54.2 | 132 |
| Question Answering | CommonsenseQA (CSQA) | Accuracy | 64.6 | 124 |
| Language Understanding | MMLU 0-shot | Accuracy | 52.3 | 110 |
| Commonsense Reasoning | Common Sense QA (test) | ARC-C Accuracy (5-shot) | 58 | 20 |