
Minimal Interaction Separated Tuning: A New Paradigm for Visual Adaptation

About

The rapid scaling of large pretrained vision models makes fine-tuning increasingly difficult on devices with limited computational resources. We explore a new visual adaptation paradigm called separated tuning, which treats large pretrained models as standalone feature extractors running on powerful cloud servers, while fine-tuning is carried out on devices with only modest computational resources (slow CPU, no GPU, small memory, etc.). We discuss existing methods that are potentially suitable for this paradigm, but three major drawbacks hinder their application to separated tuning: low adaptation capability, large adapter networks, and, in particular, high information transfer overhead. To address these issues, we propose Minimal Interaction Separated Tuning, or MIST, which reveals that the sum of intermediate features from pretrained models not only requires minimal information transfer but also offers high adaptation capability. With a lightweight attention-based adapter network, MIST achieves information transfer efficiency, parameter efficiency, and computational and memory efficiency, while demonstrating competitive results on various visual adaptation benchmarks.
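The separated-tuning idea in the abstract can be sketched in a few lines. The snippet below is a hypothetical illustration, not the paper's implementation: random frozen linear layers stand in for the backbone blocks, and `TinyAttentionAdapter` is an assumed name for a minimal attention-pooling adapter. It shows the key point: the cloud transfers only the *sum* of intermediate features (one feature map instead of one per block), and only the small adapter is trained on the device.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Cloud side (hypothetical): a frozen pretrained backbone ---
# Random frozen linear layers stand in for the blocks of a large
# pretrained vision model; T tokens of dimension D per image.
D, T, NUM_BLOCKS = 64, 16, 12
blocks = [rng.standard_normal((D, D)) / np.sqrt(D) for _ in range(NUM_BLOCKS)]

def cloud_extract(x):
    """Run the frozen backbone once and return the SUM of intermediate
    features. Sending one (T, D) array instead of NUM_BLOCKS of them is
    what keeps the cloud-to-device transfer minimal."""
    summed = np.zeros_like(x)
    for W in blocks:
        x = np.tanh(x @ W)   # one frozen block
        summed += x          # accumulate this block's intermediate output
    return summed            # shape (T, D): one transfer, not twelve

# --- Device side (hypothetical): a lightweight attention-based adapter ---
class TinyAttentionAdapter:
    def __init__(self, dim, num_classes, rng):
        self.q = rng.standard_normal(dim) / np.sqrt(dim)      # learned query
        self.head = rng.standard_normal((dim, num_classes)) / np.sqrt(dim)

    def __call__(self, feats):
        scores = feats @ self.q                  # (T,) attention scores
        attn = np.exp(scores - scores.max())
        attn /= attn.sum()                       # softmax over tokens
        pooled = attn @ feats                    # (D,) attention pooling
        return pooled @ self.head                # class logits

x = rng.standard_normal((T, D))   # token embeddings of one image
feats = cloud_extract(x)          # computed once on the cloud server
adapter = TinyAttentionAdapter(D, num_classes=10, rng=rng)
logits = adapter(feats)           # only this tiny module is trained on-device
print(logits.shape)
```

Only the adapter's parameters (here a query vector and a classifier head) would receive gradients on the device, which is why the approach suits hardware without a GPU.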

Ningyuan Tang, Minghao Fu, Jianxin Wu · 2024

Related benchmarks

Task                   | Dataset                   | Result                | Rank
Image Classification   | ImageNet V2 (test)        | Top-1 Accuracy 66.5   | 181
Image Classification   | ImageNet-A (test)         | --                    | 154
Image Classification   | ImageNet-Sketch (test)    | --                    | 132
Image Classification   | ImageNet-R (test)         | Accuracy 37.5         | 105
Visual Task Adaptation | VTAB 1K                   | Average Accuracy 76.7 | 78
Image Classification   | ImageNet Source 1K (test) | Accuracy 76.5         | 10
