
Time-, Memory- and Parameter-Efficient Visual Adaptation

About

As foundation models become more popular, there is a growing need to efficiently finetune them for downstream tasks. Although numerous adaptation methods have been proposed, they are designed to be efficient only in terms of how many parameters are trained. However, they typically still require backpropagating gradients throughout the model, meaning that their training-time and -memory costs are not reduced as significantly. We propose an adaptation method which does not backpropagate gradients through the backbone. We achieve this by designing a lightweight network in parallel that operates on features from the frozen, pretrained backbone. As a result, our method is efficient not only in terms of parameters, but also in training-time and memory usage. Our approach achieves state-of-the-art accuracy-parameter trade-offs on the popular VTAB benchmark, and we further show how we outperform prior works with respect to training-time and -memory usage too. We further demonstrate the training efficiency and scalability of our method by adapting a vision transformer backbone of 4 billion parameters for the computationally demanding task of video classification, without any intricate model parallelism. Here, we outperform a prior adapter-based method which could only scale to a 1 billion parameter backbone, as well as full finetuning of a smaller backbone, with the same GPU and less training time.
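The core idea above can be illustrated with a minimal sketch in plain Python (hypothetical names, not the paper's actual implementation): the frozen backbone is run once as a constant feature extractor, and gradients are computed only for the lightweight parallel module operating on its features, so nothing is ever backpropagated through the backbone.

```python
# Minimal sketch (hypothetical) of side-network adaptation: the frozen
# backbone produces features once, and only the small parallel head is
# trained. No gradient ever flows through the backbone.

def frozen_backbone(x):
    """Stand-in for a frozen, pretrained backbone: a fixed feature map
    with no trainable parameters."""
    return [xi * 2.0 + 1.0 for xi in x]

class LinearHead:
    """Lightweight parallel module that operates on backbone features."""
    def __init__(self, dim):
        self.w = [0.0] * dim
        self.b = 0.0

    def forward(self, feats):
        return sum(w * f for w, f in zip(self.w, feats)) + self.b

    def sgd_step(self, feats, target, lr=0.01):
        # Gradients are taken only w.r.t. the head's parameters; the
        # backbone output is treated as a constant input.
        err = self.forward(feats) - target
        self.w = [w - lr * err * f for w, f in zip(self.w, feats)]
        self.b -= lr * err
        return err ** 2  # squared-error loss before the update

x = [0.5, -1.0, 2.0]
feats = frozen_backbone(x)   # computed once; no backprop through it
head = LinearHead(len(feats))
losses = [head.sgd_step(feats, target=3.0) for _ in range(50)]
print(losses[0], losses[-1])
```

Because the backbone is only evaluated in the forward direction, its activations and gradients need not be stored for the backward pass, which is where the training-time and memory savings come from.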

Otniel-Bogdan Mercea, Alexey Gritsenko, Cordelia Schmid, Anurag Arnab · 2024

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Image Classification | ImageNet (val) | Accuracy: 89 | 300 |
| Image Classification | VTAB 1K | Overall Mean Accuracy: 78.4 | 204 |
| Image Classification | iNaturalist 2018 (val) | -- | 116 |
| Image Classification | VTAB-1K 1.0 (test) | Natural Accuracy: 82.8 | 102 |
| Image Classification | Places-365 (val) | Accuracy: 61.3 | 43 |
| Image Classification | iNaturalist 2021 (val) | Accuracy: 83.8 | 15 |
