
PACE: Marrying generalization in PArameter-efficient fine-tuning with Consistency rEgularization

About

Parameter-Efficient Fine-Tuning (PEFT) effectively adapts pre-trained transformers to downstream tasks. However, optimizing task performance often comes at the cost of generalizability in fine-tuned models. To address this issue, we theoretically connect smaller weight gradient norms during training and larger datasets to improvements in model generalization. Motivated by this connection, we propose reducing gradient norms for enhanced generalization and aligning the fine-tuned model with its pre-trained counterpart to retain knowledge from large-scale pre-training data. Yet, naive alignment does not guarantee gradient reduction and can potentially cause gradient explosion, complicating efforts to manage gradients. To address this issue, we propose PACE, marrying generalization of PArameter-efficient fine-tuning with Consistency rEgularization. We perturb the features learned by the adapter with multiplicative noise and ensure the fine-tuned model remains consistent for the same sample under different perturbations. Theoretical analysis shows that PACE not only implicitly regularizes gradients for enhanced generalization, but also implicitly aligns the fine-tuned and pre-trained models to retain knowledge. Experimental evidence supports our theories. PACE surpasses existing PEFT methods in visual adaptation tasks (VTAB-1k, FGVC, few-shot learning, domain adaptation), showcasing its potential for resource-efficient fine-tuning. It also improves LoRA in text classification (GLUE) and mathematical reasoning (GSM-8K). The code is available at https://github.com/MaxwellYaoNi/PACE
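The core recipe described above (perturb the adapter's features with multiplicative noise, then penalize the disagreement between two stochastic forward passes of the same sample) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the layer names, the rank, the noise standard deviation, and the use of a plain MSE consistency penalty are all assumptions for the sketch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NoisyLoRALinear(nn.Module):
    """Frozen linear layer plus a LoRA adapter whose output is perturbed
    by multiplicative Gaussian noise during training (a sketch of the
    feature perturbation described in the abstract; hyperparameters are
    illustrative assumptions)."""
    def __init__(self, in_dim, out_dim, rank=4, noise_std=0.5):
        super().__init__()
        self.base = nn.Linear(in_dim, out_dim)
        self.base.weight.requires_grad_(False)  # pre-trained weight stays frozen
        self.lora_a = nn.Linear(in_dim, rank, bias=False)
        self.lora_b = nn.Linear(rank, out_dim, bias=False)
        nn.init.zeros_(self.lora_b.weight)  # standard LoRA zero-init
        self.noise_std = noise_std

    def forward(self, x):
        delta = self.lora_b(self.lora_a(x))
        if self.training:
            # multiplicative noise z ~ N(1, sigma^2) applied to adapter features
            z = 1.0 + self.noise_std * torch.randn_like(delta)
            delta = delta * z
        return self.base(x) + delta

def consistency_loss(model, x):
    """Run the same batch twice; each pass draws fresh noise, and the
    squared difference between the two outputs is penalized so the model
    stays consistent under different perturbations."""
    out1, out2 = model(x), model(x)
    return F.mse_loss(out1, out2)
```

In training, this regularizer would be added to the task loss, e.g. `loss = task_loss + lam * consistency_loss(model, x)` with a tunable weight `lam`; at evaluation time the noise is disabled, so the layer is deterministic.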

Yao Ni, Shan Zhang, Piotr Koniusz • 2024

Related benchmarks

Task | Dataset | Metric | Result | Rank
Mathematical Reasoning | GSM8K | Accuracy | 78.77 | 983
Image Classification | FGVC-Aircraft (test) | Accuracy | 62.5 | 231
Image Classification | VTAB-1K | Overall Mean Accuracy | 79 | 204
Image Classification | ImageNet-1K | Accuracy | 80.1 | 190
Image Classification | Food101 (test) | Accuracy | 77.8 | 87
Fine-grained Visual Categorization | FGVC (CUB-200-2011, NABirds, Oxford Flowers, Stanford Cars, Stanford Dogs) (test) | CUB-200-2011 Accuracy | 89.8 | 32
Image Classification | FGVC (test) | Accuracy | 81.9 | 25
Domain Adaptation | ImageNet Domain Adaptation | Accuracy (ImageNet-Sketch) | 45.8 | 24
Image Classification | CIFAR-100 VTAB-1K | Accuracy | 79 | 24
Domain Adaptation | ImageNet Domain Adaptation (Source: ImageNet-1K, Targets: Sketch, V2, A, R) 1.0 (test) | Accuracy on Source Domain (1K) | 0.79 | 9
(10 of 14 benchmark rows shown)
