
A linearized framework and a new benchmark for model selection for fine-tuning

About

Fine-tuning from a collection of models pre-trained on different domains (a "model zoo") is an emerging technique for improving test accuracy in the low-data regime. However, model selection, i.e., how to pre-select the right model to fine-tune from a model zoo without performing any training, remains an open problem. We use a linearized framework to approximate fine-tuning, and introduce two new baselines for model selection -- Label-Gradient and Label-Feature Correlation. Since the model selection algorithms in the literature have been tested on different use-cases and never compared directly, we introduce a new comprehensive benchmark for model selection comprising: i) a model zoo of single- and multi-domain models, and ii) many target tasks. Our benchmark highlights the accuracy gain from using a model zoo compared to fine-tuning ImageNet models. We show that our model selection baseline can select optimal models to fine-tune in a few selections and has the highest ranking correlation to fine-tuning accuracy compared to existing algorithms.
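The benchmark scores selection algorithms by the ranking correlation between each algorithm's per-model scores and the accuracies those models actually reach after fine-tuning, using Weighted Kendall's Tau. A minimal sketch of that evaluation, assuming a hypothetical set of five zoo models with made-up selection scores and fine-tuning accuracies:

```python
import numpy as np
from scipy.stats import weightedtau

# Hypothetical example: scores a model-selection algorithm assigns to
# 5 pre-trained models (higher = predicted to transfer better), and
# the test accuracies those models actually reach after fine-tuning.
selection_scores = np.array([0.91, 0.35, 0.62, 0.48, 0.77])
finetune_accuracy = np.array([88.2, 71.5, 80.1, 74.3, 85.0])

# Weighted Kendall's Tau up-weights agreement among the top-ranked
# items, which matters most when picking which model to fine-tune.
# Here the two orderings agree perfectly, so tau = 1.0.
tau, _ = weightedtau(selection_scores, finetune_accuracy)
print(f"Weighted Kendall's Tau: {tau:.3f}")
```

A tau near 1 means the algorithm ranks models almost exactly as fine-tuning would; negative values (as on some datasets in the table below) mean the predicted ranking is worse than uninformative.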

Aditya Deshpande, Alessandro Achille, Avinash Ravichandran, Hao Li, Luca Zancato, Charless Fowlkes, Rahul Bhotika, Stefano Soatto, Pietro Perona • 2021

Related benchmarks

Task              Dataset      Metric                   Result    Rank
Model Selection   DTD          Weighted Kendall's Tau   -0.669    46
Model Selection   CIFAR100     Weighted Kendall's Tau    0.418    36
Model Selection   CIFAR10      Weighted Kendall's Tau    0.346    36
Model Selection   Cars         Weighted Kendall's Tau    0.243    36
Model Selection   Pets         Weighted Kendall's Tau    0.215    36
Model Selection   SUN397       Weighted Kendall's Tau   -0.151    36
PTM Selection     Aircraft     Weighted Kendall's Tau    0.279    19
PTM Selection     Caltech101   Weighted Kendall's Tau   -0.165    19
