
Scalable Diverse Model Selection for Accessible Transfer Learning

About

With the preponderance of pretrained deep learning models available off-the-shelf from model banks today, finding the best weights to fine-tune to your use case can be a daunting task. Several methods have recently been proposed to find good models for transfer learning, but they either don't scale well to large model banks or don't perform well on the diversity of off-the-shelf models. Ideally, the question we want to answer is: "given some data and a source model, can you quickly predict the model's accuracy after fine-tuning?" In this paper, we formalize this setting as "Scalable Diverse Model Selection" and propose several benchmarks for evaluating this task. We find that existing model selection and transferability estimation methods perform poorly here and analyze why this is the case. We then introduce simple techniques to improve the performance and speed of these algorithms. Finally, we iterate on existing methods to create PARC, which outperforms all other methods on diverse model selection. We have released the benchmarks and method code in the hope of inspiring future work in model selection for accessible transfer learning.
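One cheap proxy for "accuracy after fine-tuning" used as a baseline in this line of work is the leave-one-out nearest-neighbor accuracy of the target data under each source model's frozen features: models whose embeddings already cluster the target classes tend to fine-tune well. A minimal sketch with synthetic features (the data below is illustrative, not from the paper or any real model):

```python
import numpy as np

def knn_transferability(features, labels):
    """Leave-one-out 1-NN accuracy on frozen features:
    a cheap proxy for how well a source model will transfer."""
    # Pairwise Euclidean distances between all feature vectors
    dists = np.linalg.norm(features[:, None] - features[None, :], axis=-1)
    np.fill_diagonal(dists, np.inf)  # exclude self-matches
    nearest = dists.argmin(axis=1)
    return (labels[nearest] == labels).mean()

# Synthetic stand-in for one model's embeddings of a 4-class target dataset
rng = np.random.default_rng(0)
labels = np.repeat(np.arange(4), 25)
centers = np.array([[0, 0], [6, 0], [0, 6], [6, 6]], dtype=float)
feats = rng.normal(size=(100, 2)) + centers[labels]

print(knn_transferability(feats, labels))  # high for well-separated features
```

In practice this score would be computed once per candidate model (one forward pass over the target data, no fine-tuning), and the models would then be ranked by it.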

Daniel Bolya, Rohit Mittapalli, Judy Hoffman • 2021

Related benchmarks

Task                  Dataset                 Metric                  Result  Rank
Image Classification  EuroSAT                 Accuracy                34.43   497
Image Classification  Food-101                Accuracy                35.16   494
Image Classification  ImageNet-A (test)       --                      --      154
Image Classification  ImageNet-Sketch (test)  --                      --      132
Image Classification  ImageNet-R (test)       Accuracy                11.85   105
Image Classification  iNaturalist             Accuracy                0.07    51
Model Selection       DTD                     Weighted Kendall's Tau  0.536   46
Image Classification  ObjectNet (test)        --                      --      43
Model Selection       Pets                    Weighted Kendall's Tau  0.496   36
Model Selection       Cars                    Weighted Kendall's Tau  0.424   36
Showing 10 of 18 rows
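The model-selection rows above are scored with weighted Kendall's tau, which compares a method's predicted ranking of source models against their true fine-tuned accuracies, weighting agreement near the top of the ranking more heavily. A minimal sketch using SciPy's implementation (the scores below are illustrative, not from the paper):

```python
from scipy.stats import weightedtau

# Hypothetical transferability scores predicted for five source models
predicted = [0.91, 0.40, 0.77, 0.12, 0.55]
# Hypothetical accuracies actually obtained after fine-tuning each model
actual    = [0.88, 0.35, 0.80, 0.20, 0.50]

# weightedtau down-weights disagreements lower in the ranking,
# so getting the best models right matters most
tau, _ = weightedtau(predicted, actual)
print(f"weighted Kendall's tau: {tau:.3f}")  # 1.000 (orderings agree exactly)
```

A tau of 1 means the predicted and actual orderings agree perfectly; values near 0 mean the prediction is no better than chance at ranking the models.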
