
Task Residual for Tuning Vision-Language Models

About

Large-scale vision-language models (VLMs) pre-trained on billion-level data have learned general visual representations and broad visual concepts. In principle, the well-learned knowledge structure of a VLM should be inherited appropriately when the model is transferred to downstream tasks with limited data. However, most existing efficient transfer learning (ETL) approaches for VLMs either damage or are excessively biased towards the prior knowledge, e.g., prompt tuning (PT) discards the pre-trained text-based classifier and builds a new one, while adapter-style tuning (AT) fully relies on the pre-trained features. To address this, we propose a new efficient tuning approach for VLMs named Task Residual Tuning (TaskRes), which operates directly on the text-based classifier and explicitly decouples the prior knowledge of the pre-trained model from the new knowledge regarding a target task. Specifically, TaskRes keeps the original classifier weights from the VLM frozen and obtains a new classifier for the target task by tuning a set of prior-independent parameters as a residual to the original weights, which enables reliable prior-knowledge preservation and flexible task-specific knowledge exploration. The proposed TaskRes is simple yet effective: it significantly outperforms previous ETL methods (e.g., PT and AT) on 11 benchmark datasets while requiring minimal implementation effort. Our code is available at https://github.com/geekyutao/TaskRes.
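The core idea — a frozen pre-trained text classifier plus a tunable, scaled residual — can be sketched in a few lines. This is a minimal illustration in pure Python, not the authors' implementation; the class name `TaskResClassifier`, the scaling factor `alpha`, and the cosine-similarity scoring are assumptions modeled on CLIP-style zero-shot classification.

```python
import math

class TaskResClassifier:
    """Minimal TaskRes sketch (hypothetical names, for illustration only).

    The pre-trained text-based classifier weights stay frozen; only a
    zero-initialized residual of the same shape is tuned. The task-specific
    classifier is t' = t + alpha * x, so at initialization it reproduces
    the zero-shot classifier exactly.
    """

    def __init__(self, base_weights, alpha=0.5):
        # Frozen text embeddings, one row per class (C x D).
        self.base = [row[:] for row in base_weights]
        # Scaling factor for the residual (a hyperparameter here).
        self.alpha = alpha
        # Prior-independent tunable parameters, zero-initialized.
        self.residual = [[0.0] * len(row) for row in base_weights]

    def weights(self):
        # New classifier = frozen prior + scaled task residual.
        return [[b + self.alpha * r for b, r in zip(brow, rrow)]
                for brow, rrow in zip(self.base, self.residual)]

    def logits(self, feature):
        # Cosine similarity between one image feature and each class weight,
        # as in CLIP-style zero-shot classification.
        def cos(u, v):
            dot = sum(a * b for a, b in zip(u, v))
            nu = math.sqrt(sum(a * a for a in u))
            nv = math.sqrt(sum(b * b for b in v))
            return dot / (nu * nv)
        return [cos(feature, w) for w in self.weights()]
```

During tuning, gradients would flow only into `residual`; `base` is never updated, which is what preserves the pre-trained prior while still allowing task-specific adaptation.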

Tao Yu, Zhihe Lu, Xin Jin, Zhibo Chen, Xinchao Wang • 2022

Related benchmarks

Task | Dataset | Metric | Result | Rank
Image Classification | ImageNet 1k (test) | Top-1 Accuracy | 73 | 798
Image Classification | DTD | Accuracy | 67.57 | 487
Image Classification | ImageNet | Top-1 Accuracy | 73.07 | 324
Image Classification | ImageNet | -- | -- | 184
Image Classification | Caltech101 | Base Accuracy | 92.9 | 129
Image Classification | ImageNet (INet) | Accuracy | 64.7 | 50
Image Classification | ImageNet Robustness Generalization Suite (Sketch, A, R, V2) | Top-1 Acc (V2) | 65.3 | 31
Few-shot Image Classification | 11 datasets average, CLIP-based (ImageNet, Caltech101, OxfordPets, StanfordCars, Flowers102, Food101, FGVCAircraft, SUN397, DTD, EuroSAT, UCF101) | Accuracy | 74.42 | 30
Image Classification | ImageNet 1k (source) | Top-1 Acc | 70.84 | 28
Image Classification | ImageNet Distribution Shifts (average of ImageNet-V2, ImageNet-R, ImageNet-Sketch, ObjectNet, and ImageNet-A; test) | Average Accuracy | 55.35 | 19

Showing 10 of 23 rows.
