
Res-Tuning: A Flexible and Efficient Tuning Paradigm via Unbinding Tuner from Backbone

About

Parameter-efficient tuning has become a trend in transferring large-scale foundation models to downstream applications. Existing methods typically embed some light-weight tuners into the backbone, where both the design and the learning of the tuners are highly dependent on the base model. This work offers a new tuning paradigm, dubbed Res-Tuning, which intentionally unbinds tuners from the backbone. With both theoretical and empirical evidence, we show that popular tuning approaches have their equivalent counterparts under our unbinding formulation, and hence can be integrated into our framework effortlessly. Thanks to the structural disentanglement, we manage to free the design of tuners from the network architecture, facilitating flexible combination of various tuning strategies. We further propose a memory-efficient variant of Res-Tuning, where the bypass (i.e., formed by a sequence of tuners) is effectively detached from the main branch, such that the gradients are back-propagated only to the tuners but not to the backbone. Such a detachment also allows one-time backbone forward for multi-task inference. Extensive experiments on both discriminative and generative tasks demonstrate the superiority of our method over existing alternatives from the perspectives of efficacy and efficiency. Project page: https://res-tuning.github.io/.
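The memory-efficient variant described above can be sketched numerically: the frozen backbone runs forward once, its intermediate features feed a detached bypass built from a sequence of light-weight tuners, and only the tuner parameters would receive gradients. This is a minimal NumPy illustration of that data flow, not the paper's implementation; all function names, shapes, and the low-rank tuner form are assumptions for the sketch.

```python
import numpy as np

def backbone_block(x, W):
    # Frozen backbone block: a linear map plus ReLU, standing in for a
    # transformer block (illustrative only, not the paper's architecture).
    return np.maximum(W @ x, 0.0)

def tuner(h, A, B):
    # Light-weight low-rank tuner (hypothetical form): projects a feature
    # down to rank r and back up, so it has far fewer parameters than W.
    return B @ (A @ h)

def res_tuning_bypass(x, backbone_weights, tuner_weights):
    # One-time forward through the frozen backbone, collecting the
    # intermediate features that the bypass will consume.
    feats, h = [], x
    for W in backbone_weights:
        h = backbone_block(h, W)
        feats.append(h)
    # Detached bypass: a sequence of tuners reads the (fixed) backbone
    # features, so gradients would flow only through A and B, never into W.
    z = np.zeros_like(x)
    for h_i, (A, B) in zip(feats, tuner_weights):
        z = tuner(h_i + z, A, B)  # residual-style accumulation in the bypass
    # Combine the untouched backbone output with the tuned bypass signal.
    return h + z
```

Because the backbone features are computed once and never depend on the tuners, several task-specific tuner sets could reuse the same `feats`, which is the one-time backbone forward for multi-task inference mentioned in the abstract.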

Zeyinzi Jiang, Chaojie Mao, Ziyuan Huang, Ao Ma, Yiliang Lv, Yujun Shen, Deli Zhao, Jingren Zhou • 2023

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Image Classification | CIFAR-100 (test) | Accuracy: 93.25 | 3518 |
| Image Classification | VTAB 1K | Overall Mean Accuracy: 76.32 | 204 |
| Text Classification | SST-2 | Accuracy: 94.56 | 121 |
| Image Classification | VTAB-1K 1.0 (test) | Natural Accuracy: 82.3 | 102 |
| Image Classification | ImageNet Domain Generalization (Source: ImageNet; Targets: ImageNetV2, ImageNet-Sketch, ImageNet-A, ImageNet-R) (test) | Accuracy (ImageNetV2): 66.58 | 53 |
| Text Classification | MNLI | Accuracy: 87.45 | 32 |
| Fine-grained Visual Categorization | FGVC (CUB-200-2011, NABirds, Oxford Flowers, Stanford Cars, Stanford Dogs) (test) | CUB-200-2011 Accuracy: 89.66 | 32 |
| Visual Task Adaptation | VTAB 1k (test) | CIFAR-100 Accuracy: 75.2 | 15 |
| Text-to-Image Generation | COCO 2017 (val) | FID: 13.96 | 8 |
