
Ranking Neural Checkpoints

About

This paper is concerned with ranking many pre-trained deep neural networks (DNNs), called checkpoints, for transfer learning to a downstream task. Thanks to the broad use of DNNs, we may easily collect hundreds of checkpoints from various sources. Which of them transfers best to our downstream task of interest? Striving to answer this question thoroughly, we establish a neural checkpoint ranking benchmark (NeuCRaB) and study some intuitive ranking measures. These measures are generic, applying to checkpoints with different output types without knowing how, or on which dataset, the checkpoints were pre-trained. They also incur low computational cost, making them practically meaningful. Our results suggest that the linear separability of the features extracted by the checkpoints is a strong indicator of transferability. We also arrive at a new ranking measure, NLEEP, which gives rise to the best performance in the experiments.
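As context for the abstract, NLEEP builds on the LEEP score (Nguyen et al., 2020), replacing the source model's softmax outputs with Gaussian-mixture posteriors over the extracted features. A minimal NumPy sketch of the underlying LEEP-style computation, assuming the checkpoint's per-example source-category probabilities and the downstream labels are given (function name and array shapes are illustrative, not from the paper):

```python
import numpy as np

def leep_score(probs, labels, num_classes):
    """LEEP-style transferability score: average log-likelihood of the
    downstream labels under the "expected empirical predictor".
    probs:  (n, z) source-category probabilities for each downstream example
    labels: (n,)   downstream labels in [0, num_classes)
    """
    n, _ = probs.shape
    # Empirical joint P(y, z): average source probability mass that
    # examples of downstream class y assign to source category z.
    joint = np.stack([probs[labels == y].sum(axis=0) / n
                      for y in range(num_classes)])
    cond = joint / joint.sum(axis=0)      # conditional P(y | z)
    # Expected empirical prediction: sum_z P(y|z) * theta_z(x_i).
    pred = probs @ cond.T                 # (n, num_classes)
    return np.log(pred[np.arange(n), labels]).mean()

# Toy comparison: a checkpoint whose outputs align with the downstream
# labels should score higher than one producing uninformative outputs.
labels = np.array([0, 1, 0, 1])
probs_informative = np.array([[0.9, 0.1], [0.1, 0.9], [0.9, 0.1], [0.1, 0.9]])
probs_uniform = np.full((4, 2), 0.5)
```

A higher (less negative) score indicates better expected transfer; NLEEP applies the same construction to mixture-model posteriors computed from the checkpoint's features rather than its classifier head.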

Yandong Li, Xuhui Jia, Ruoxin Sang, Yukun Zhu, Bradley Green, Liqiang Wang, Boqing Gong• 2020

Related benchmarks

| Task                 | Dataset             | Metric                 | Result | Rank |
|----------------------|---------------------|------------------------|--------|------|
| Image Classification | Food-101            | Accuracy               | 87.37  | 494  |
| Image Classification | Stanford Cars       | Accuracy               | 91.76  | 477  |
| Image Classification | SUN397              | Accuracy               | 66.95  | 425  |
| Image Classification | Caltech-101         | Accuracy               | 94.12  | 198  |
| Image Classification | FGVC Aircraft       | --                     | --     | 185  |
| Image Classification | Oxford Flowers 102  | Accuracy               | 97.65  | 172  |
| Image Classification | Oxford-IIIT Pet     | Accuracy               | 94.71  | 161  |
| Image Classification | CIFAR-10            | Accuracy               | 98.02  | 74   |
| Model Selection      | DTD                 | Weighted Kendall's Tau | 0.777  | 46   |
| Image Classification | Describable Textures| Accuracy               | 78.14  | 41   |
Showing 10 of 50 rows
