
ApiQ: Finetuning of 2-Bit Quantized Large Language Model

About

Memory-efficient finetuning of large language models (LLMs) has recently attracted significant attention with the increasing size of LLMs, primarily due to the constraints posed by GPU memory limitations and the effectiveness of these methods compared to full finetuning. Despite these advancements, current strategies for memory-efficient finetuning, such as QLoRA, exhibit inconsistent performance across diverse bit-width quantizations and multifaceted tasks. This inconsistency largely stems from the detrimental impact of the quantization process on preserved knowledge, leading to catastrophic forgetting and undermining the value of the pretrained model as a starting point for finetuning. In this work, we introduce a novel quantization framework, ApiQ, designed to restore the information lost during quantization by concurrently initializing the LoRA components and quantizing the weights of LLMs. This approach preserves the original LLM's activation precision while mitigating the propagation of error from shallower into deeper layers. Through comprehensive evaluations conducted on a spectrum of language tasks with various LLMs, ApiQ demonstrably minimizes activation error during quantization. Consequently, it consistently achieves superior finetuning results across various bit-widths.
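The core idea described above, jointly initializing the LoRA factors while quantizing so that the quantized-plus-LoRA layer reproduces the original layer's activations, can be sketched as minimizing an activation-error objective. The following is a minimal illustrative sketch, not the authors' implementation: the 2-bit quantizer, scale choice, and optimizer settings are simplifying assumptions.

```python
import torch

def quantize_2bit(w, scale):
    # Uniform symmetric 2-bit quantization: 4 levels {-2, -1, 0, 1} * scale
    # (a simplifying assumption; ApiQ's actual quantizer may differ)
    return torch.clamp(torch.round(w / scale), -2, 1) * scale

def apiq_style_init(W, X, rank=4, steps=300, lr=1e-2):
    """Fit LoRA factors A, B so that (Q + B @ A) @ X approximates W @ X,
    i.e. minimize the layer's activation error after quantization.
    W: (d_out, d_in) weight; X: (d_in, n) batch of layer inputs."""
    scale = W.abs().mean()              # crude per-tensor scale, illustrative
    Q = quantize_2bit(W, scale)         # frozen quantized weight
    d_out, d_in = W.shape
    A = torch.zeros(rank, d_in, requires_grad=True)   # zero init: starts at Q @ X
    B = torch.randn(d_out, rank, requires_grad=True)
    target = W @ X                      # full-precision activations to match
    opt = torch.optim.Adam([A, B], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((Q + B @ A) @ X - target).pow(2).mean()
        loss.backward()
        opt.step()
    return Q, A.detach(), B.detach()
```

Because `A` starts at zero, the initial objective equals the plain quantization error; optimizing `A` and `B` can only use the low-rank budget to absorb part of that error, which is why such an initialization reduces the activation mismatch that would otherwise propagate into deeper layers.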

Baohao Liao, Christian Herold, Shahram Khadivi, Christof Monz• 2024

Related benchmarks

| Task                  | Dataset                    | Metric        | Result | Rank |
|-----------------------|----------------------------|---------------|--------|------|
| Language Modeling     | WikiText2                  | Perplexity    | 6.8    | 2839 |
| Mathematical Reasoning| MathQA                     | Accuracy      | 36.18  | 305  |
| Commonsense Reasoning | ARC Challenge              | Accuracy      | 36.68  | 190  |
| Math Reasoning        | GSM8K                      | Accuracy      | 30.09  | 187  |
| Commonsense Reasoning | ARC-C                      | Accuracy      | 46.58  | 172  |
| Language Modeling     | WikiText2                  | Perplexity    | 5.04   | 162  |
| Arithmetic Reasoning  | GSM8K (test)               | Accuracy      | 52.4   | 129  |
| Question Answering    | MathQA (test)              | Accuracy      | 36.18  | 41   |
| Summarization         | CNN/DailyMail (test)       | ROUGE-L       | 18.04  | 33   |
| Arithmetic Reasoning  | AQuA, GSM8K, MAWPS, SVAMP  | AQuA Accuracy | 26     | 31   |

Showing 10 of 12 rows.
