
QA-LoRA: Quantization-Aware Low-Rank Adaptation of Large Language Models

About

Recent years have witnessed the rapid development of large language models (LLMs). Despite their strong ability in many language-understanding tasks, the heavy computational burden largely restricts the application of LLMs, especially when one needs to deploy them onto edge devices. In this paper, we propose a quantization-aware low-rank adaptation (QA-LoRA) algorithm. The motivation lies in the imbalanced degrees of freedom of quantization and adaptation, and the solution is to use group-wise operators that increase the degree of freedom of quantization while decreasing that of adaptation. QA-LoRA is easily implemented with a few lines of code, and it equips the original LoRA with two-fold abilities: (i) during fine-tuning, the LLM's weights are quantized (e.g., into INT4) to reduce time and memory usage; (ii) after fine-tuning, the LLM and auxiliary weights are naturally integrated into a quantized model without loss of accuracy. We apply QA-LoRA to the LLaMA and LLaMA2 model families and validate its effectiveness on different fine-tuning datasets and downstream scenarios. Code will be made available at https://github.com/yuhuixu1993/qa-lora.

Yuhui Xu, Lingxi Xie, Xiaotao Gu, Xin Chen, Heng Chang, Hengheng Zhang, Zhengsu Chen, Xiaopeng Zhang, Qi Tian • 2023
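
The group-wise idea in the abstract can be sketched in a few lines of PyTorch. The snippet below is an illustrative assumption, not the repository's actual API: a frozen tensor stands in for the INT4 group-wise quantized weight, and the LoRA branch reads group-averaged inputs, so the low-rank update has one input per quantization group (fewer adaptation degrees of freedom, as the abstract describes). The class name QALoRALinear and the hyperparameters (rank, group_size, lora_alpha) are hypothetical.

import math
import torch
import torch.nn as nn

class QALoRALinear(nn.Module):
    """Hypothetical sketch of a QA-LoRA-style linear layer (not the official code)."""
    def __init__(self, in_features, out_features, rank=16, group_size=32, lora_alpha=16):
        super().__init__()
        assert in_features % group_size == 0
        self.num_groups = in_features // group_size
        self.group_size = group_size
        # Frozen base weight; in a real setup this would be stored as an INT4
        # group-wise quantized matrix and dequantized on the fly.
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.02,
                                   requires_grad=False)
        # LoRA factors: A sees only num_groups inputs (one per quantization group).
        self.lora_A = nn.Parameter(torch.zeros(rank, self.num_groups))
        self.lora_B = nn.Parameter(torch.zeros(out_features, rank))
        nn.init.kaiming_uniform_(self.lora_A, a=math.sqrt(5))
        self.scaling = lora_alpha / rank

    def forward(self, x):
        base = x @ self.weight.t()                     # frozen (quantized) path
        # Average the input within each quantization group, then apply LoRA.
        pooled = x.reshape(*x.shape[:-1], self.num_groups, self.group_size).mean(-1)
        update = (pooled @ self.lora_A.t()) @ self.lora_B.t()
        return base + self.scaling * update

# Usage: only lora_A and lora_B receive gradients during fine-tuning.
layer = QALoRALinear(in_features=4096, out_features=4096)
y = layer(torch.randn(2, 128, 4096))

Because the update is shared within each group of input channels, merging it after fine-tuning amounts to shifting the per-group quantization parameters of the base weight, which is how the merged model can stay in the quantized format without the accuracy loss that merging vanilla LoRA into a quantized matrix would cause.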

Related benchmarks

Task                    Dataset                  Result                         Rank
Language Understanding  MMLU 5-shot (test)       Accuracy: 49.2                 149
Language Understanding  MMLU 5-shot              Accuracy: 54.2                 132
Question Answering      CommonsenseQA (CSQA)     Accuracy: 64.6                 124
Language Understanding  MMLU 0-shot              Accuracy: 52.3                 110
Commonsense Reasoning   Common Sense QA (test)   ARC-C Accuracy (5-shot): 58    20
