PACE: Marrying generalization in PArameter-efficient fine-tuning with Consistency rEgularization
About
Parameter-Efficient Fine-Tuning (PEFT) effectively adapts pre-trained transformers to downstream tasks. However, optimizing task performance often comes at the cost of generalizability in fine-tuned models. To address this issue, we theoretically connect smaller weight gradient norms during training and larger datasets to improvements in model generalization. Motivated by this connection, we propose reducing gradient norms for enhanced generalization and aligning the fine-tuned model with its pre-trained counterpart to retain knowledge from large-scale pre-training data. Yet naive alignment does not guarantee gradient reduction and can potentially cause gradient explosion, complicating efforts to manage gradients. To address this issue, we propose PACE, marrying generalization of PArameter-efficient fine-tuning with Consistency rEgularization. We perturb the features learned by the adapter with multiplicative noise and ensure the fine-tuned model remains consistent for the same sample under different perturbations. Theoretical analysis shows that PACE not only implicitly regularizes gradients for enhanced generalization, but also implicitly aligns the fine-tuned and pre-trained models to retain knowledge. Experimental evidence supports our theories. PACE surpasses existing PEFT methods on visual adaptation tasks (VTAB-1k, FGVC, few-shot learning, domain adaptation), showcasing its potential for resource-efficient fine-tuning. It also improves LoRA in text classification (GLUE) and mathematical reasoning (GSM-8K). The code is available at https://github.com/MaxwellYaoNi/PACE
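The core idea described above (perturbing adapter features with multiplicative noise and penalizing disagreement between two perturbed passes on the same sample) can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation; the names `adapter_forward`, `pace_consistency_loss`, and the noise scale `sigma` are illustrative, and the adapter is simplified to a single linear map.

```python
import numpy as np

rng = np.random.default_rng(0)

def adapter_forward(x, W, sigma, rng):
    """Adapter features perturbed by multiplicative Gaussian noise ~ N(1, sigma^2)."""
    h = x @ W                                          # simplified adapter features
    noise = 1.0 + sigma * rng.standard_normal(h.shape) # multiplicative perturbation
    return h * noise

def pace_consistency_loss(x, W, sigma, rng):
    """Penalize disagreement between two independently perturbed forward passes."""
    y1 = adapter_forward(x, W, sigma, rng)
    y2 = adapter_forward(x, W, sigma, rng)
    return float(np.mean((y1 - y2) ** 2))

x = rng.standard_normal((4, 8))        # a toy batch of 4 samples, dim 8
W = rng.standard_normal((8, 8)) * 0.1  # toy adapter weights
loss_noisy = pace_consistency_loss(x, W, sigma=0.5, rng=rng)
loss_zero = pace_consistency_loss(x, W, sigma=0.0, rng=rng)
print(loss_noisy, loss_zero)  # with sigma=0 the two passes agree, so the loss is 0
```

In training, this consistency term would be added to the task loss; per the paper's analysis, minimizing it implicitly regularizes gradient norms and aligns the fine-tuned model with the pre-trained one.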
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Mathematical Reasoning | GSM8K | Accuracy | 78.77 | 983 |
| Image Classification | FGVC-Aircraft (test) | Accuracy | 62.5 | 231 |
| Image Classification | VTAB 1K | Overall Mean Accuracy | 79 | 204 |
| Image Classification | ImageNet-1K | Accuracy | 80.1 | 190 |
| Image Classification | Food101 (test) | Accuracy | 77.8 | 87 |
| Fine-grained Visual Categorization | FGVC (CUB-200-2011, NABirds, Oxford Flowers, Stanford Cars, Stanford Dogs) (test) | CUB-200-2011 Accuracy | 89.8 | 32 |
| Image Classification | FGVC (test) | Accuracy | 81.9 | 25 |
| Domain Adaptation | ImageNet Domain Adaptation | Accuracy (ImageNet-Sketch) | 45.8 | 24 |
| Image Classification | CIFAR-100 VTAB-1K | Accuracy | 79 | 24 |
| Domain Adaptation | ImageNet Domain Adaptation (Source: ImageNet-1K, Targets: Sketch, V2, A, R) 1.0 (test) | Accuracy on Source Domain (1K) | 0.79 | 9 |