On-the-Fly Adaptation to Quantization: Configuration-Aware LoRA for Efficient Fine-Tuning of Quantized LLMs

About

As increasingly large pre-trained models are released, deploying them on edge devices for privacy-preserving applications requires effective compression. Recent works combine quantization with the fine-tuning of high-precision LoRA adapters, which can substantially reduce model size while mitigating the accuracy loss from quantization. However, edge devices are inherently heterogeneous in their capabilities, and fine-tuning a separate adapter for every quantization setting is computationally prohibitive. In this paper, we propose CoA-LoRA, a method that dynamically adapts the LoRA adapter to arbitrary quantization configurations (i.e., the per-layer bit-width choices of a pre-trained model) without requiring repeated fine-tuning. This is accomplished via a configuration-aware model that maps each configuration to its low-rank adjustments. The effectiveness of this model critically depends on the training configuration set, a collection of configurations chosen to cover different total bit-width budgets. However, constructing a high-quality configuration set is non-trivial. We therefore design a Pareto-based configuration search that iteratively optimizes the training configuration set, yielding more precise low-rank adjustments. Our experiments demonstrate that, unlike state-of-the-art methods that require fine-tuning a separate LoRA adapter for each configuration, CoA-LoRA incurs no additional time cost while achieving comparable or even superior performance.
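The core idea of a configuration-aware model can be illustrated with a minimal sketch: a small hypernetwork that takes a per-layer bit-width configuration as input and emits the low-rank LoRA factors for every layer. All names, dimensions, and the MLP architecture below are illustrative assumptions, not the paper's actual design, and the weights are random rather than trained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: 4 transformer layers, hidden size 8, LoRA rank 2.
NUM_LAYERS, DIM, RANK = 4, 8, 2

# Hypothetical hypernetwork weights: a tiny MLP mapping a per-layer
# bit-width vector (e.g. [4, 8, 2, 4]) to the flattened LoRA factors
# (A, B) of every layer. In CoA-LoRA these weights would be trained
# jointly over the training configuration set; here they are random.
W1 = rng.normal(0, 0.1, (NUM_LAYERS, 32))
W2 = rng.normal(0, 0.1, (32, NUM_LAYERS * 2 * DIM * RANK))

def lora_factors_for(config_bits):
    """Map one quantization configuration to per-layer LoRA factors (A, B)."""
    x = np.asarray(config_bits, dtype=float) / 8.0   # normalize bit-widths
    h = np.tanh(x @ W1)
    flat = h @ W2
    per_layer = flat.reshape(NUM_LAYERS, 2, DIM, RANK)
    # Each layer's weight update is the low-rank product A @ B.T.
    return [(per_layer[l, 0], per_layer[l, 1]) for l in range(NUM_LAYERS)]

factors = lora_factors_for([4, 8, 2, 4])
delta_w0 = factors[0][0] @ factors[0][1].T   # rank-<=2 update for layer 0
```

The point of the sketch is the interface: one forward pass through the configuration-aware model replaces a separate fine-tuning run per configuration, so adapting to a new bit-width assignment costs only inference.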
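The Pareto-based configuration search can likewise be sketched at its simplest: score each candidate configuration on two competing objectives (total bit budget and a proxy loss, both lower-is-better) and keep only the non-dominated ones. The scoring values and the strict-domination rule below are illustrative assumptions; the paper's actual search iterates this selection while updating the configuration-aware model.

```python
def pareto_front(configs, scores):
    """Keep configurations not dominated on (total bits, proxy loss).

    A configuration is dominated if another scores no worse on both
    objectives and strictly better on at least one (lower is better).
    """
    front = []
    for i, (ci, si) in enumerate(zip(configs, scores)):
        dominated = any(
            sj[0] <= si[0] and sj[1] <= si[1] and sj != si
            for j, sj in enumerate(scores) if j != i
        )
        if not dominated:
            front.append(ci)
    return front

# Hypothetical candidates: (total bits, proxy loss) per configuration.
configs = ["c0", "c1", "c2", "c3"]
scores = [(16, 0.30), (16, 0.25), (24, 0.20), (24, 0.28)]
front = pareto_front(configs, scores)   # c0 and c3 are dominated
```

Here `c1` dominates `c0` (same budget, lower loss) and `c2` dominates `c3`, so only the trade-off frontier survives to train the configuration-aware model.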

Rongguang Ye, Ming Tang, Edith C. H. Ngai • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Question Answering | ARC Easy | - | - | 597 |
| Natural Language Inference | RTE | Accuracy | 62.5 | 448 |
| Commonsense Reasoning | WinoGrande | Accuracy | 67.97 | 372 |
| Question Answering | ARC Challenge | Accuracy (ARC) | 41.41 | 142 |
| Natural Language Inference | aNLI | Accuracy | 38.28 | 65 |
| Reading Comprehension | BoolQ | Accuracy (BoolQ) | 78.91 | 55 |
| Physical Commonsense Reasoning | PIQA | Accuracy | 76.56 | 45 |
| Language Modeling | LLaMA-2 13B | Perplexity (PPL) | 6.99 | 32 |
| Aggregated Downstream Evaluation | ANLI, BoolQ, Winogrande, RTE, PiQA, ARC-Easy, ARC-Challenge | Average Accuracy | 61.94 | 8 |
| Language Modeling | Qwen2.5-1.5B | HV | 47.9 | 5 |
Showing 10 of 13 rows
