A linearized framework and a new benchmark for model selection for fine-tuning
About
Fine-tuning from a collection of models pre-trained on different domains (a "model zoo") is emerging as a technique to improve test accuracy in the low-data regime. However, model selection, i.e. how to pre-select the right model to fine-tune from a model zoo without performing any training, remains an open topic. We use a linearized framework to approximate fine-tuning, and introduce two new baselines for model selection -- Label-Gradient and Label-Feature Correlation. Since the model selection algorithms in the literature have been tested on different use-cases and never compared directly, we introduce a new comprehensive benchmark for model selection comprising: i) a model zoo of single- and multi-domain models, and ii) many target tasks. Our benchmark highlights the accuracy gains of a model zoo over fine-tuning ImageNet-pretrained models. We show our model selection baselines can pick the optimal model to fine-tune within a few selections and have the highest ranking correlation with fine-tuning accuracy among existing algorithms.
Related benchmarks
| Task | Dataset | Metric | Value | Rank |
|---|---|---|---|---|
| Model Selection | DTD | Weighted Kendall's Tau | -0.669 | 46 |
| Model Selection | CIFAR100 | Weighted Kendall's Tau | 0.418 | 36 |
| Model Selection | CIFAR10 | Weighted Kendall's Tau | 0.346 | 36 |
| Model Selection | Cars | Weighted Kendall's Tau | 0.243 | 36 |
| Model Selection | Pets | Weighted Kendall's Tau | 0.215 | 36 |
| Model Selection | SUN397 | Weighted Kendall's Tau | -0.151 | 36 |
| PTM Selection | Aircraft | Weighted Kendall's Tau | 0.279 | 19 |
| PTM Selection | Caltech101 | Weighted Kendall's Tau | -0.165 | 19 |
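The weighted Kendall's tau reported above is a rank-correlation measure that, unlike plain Kendall's tau, penalizes disagreements near the top of the ranking more heavily -- which matters here because we care most about whether the best models to fine-tune are ranked first. Below is a minimal pure-Python sketch of the idea, using additive hyperbolic weights (the default in `scipy.stats.weightedtau`); the function name and the choice to rank by decreasing fine-tuning accuracy are our illustrative assumptions, not the benchmark's exact implementation:

```python
def weighted_kendall_tau(scores, accuracies):
    """Weighted Kendall's tau between model-selection scores and
    fine-tuning accuracies (illustrative sketch, not the exact
    benchmark implementation).

    Pairs involving top-ranked models (by accuracy) get larger
    hyperbolic weights, so mis-ranking the best models costs more.
    """
    n = len(scores)
    # Rank models by decreasing accuracy: rank 0 = best model.
    order = sorted(range(n), key=lambda i: -accuracies[i])
    rank = [0] * n
    for r, i in enumerate(order):
        rank[i] = r

    num = 0.0  # weighted concordant-minus-discordant count
    den = 0.0  # total weight (normalizer)
    for i in range(n):
        for j in range(i + 1, n):
            # Additive hyperbolic weighting: pairs near the top
            # of the accuracy ranking weigh more.
            w = 1.0 / (rank[i] + 1) + 1.0 / (rank[j] + 1)
            prod = (scores[i] - scores[j]) * (accuracies[i] - accuracies[j])
            num += w * (1 if prod > 0 else -1 if prod < 0 else 0)
            den += w
    return num / den
```

A perfectly concordant ranking yields 1.0 and a fully reversed one -1.0; values near 0 (or negative, as for DTD and SUN397 above) mean the selection scores carry little information about which model will fine-tune best.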