
LEEP: A New Measure to Evaluate Transferability of Learned Representations

About

We introduce a new measure to evaluate the transferability of representations learned by classifiers. Our measure, the Log Expected Empirical Prediction (LEEP), is simple and easy to compute: when given a classifier trained on a source data set, it only requires running the target data set through this classifier once. We analyze the properties of LEEP theoretically and demonstrate its effectiveness empirically. Our analysis shows that LEEP can predict the performance and convergence speed of both transfer and meta-transfer learning methods, even for small or imbalanced data. Moreover, LEEP outperforms recently proposed transferability measures such as negative conditional entropy and H scores. Notably, when transferring from ImageNet to CIFAR100, LEEP can achieve up to 30% improvement compared to the best competing method in terms of the correlations with actual transfer accuracy.
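The abstract's claim that LEEP "only requires running the target data set through this classifier once" can be made concrete with a short sketch. The version below follows the paper's construction (empirical joint over source and target labels, then the log-likelihood of a "dummy" classifier); the function name and NumPy formulation are illustrative, not the authors' reference implementation.

```python
import numpy as np

def leep(source_probs, target_labels):
    """Log Expected Empirical Prediction (illustrative sketch).

    source_probs: (n, Z) softmax outputs of the source classifier
                  on the n target examples (Z source classes).
    target_labels: (n,) target labels in {0, ..., Y-1}.
    """
    n, _ = source_probs.shape
    num_target_classes = target_labels.max() + 1

    # Empirical joint P(y, z): average source probability mass that
    # falls on each (target label, source label) pair.
    joint = np.zeros((num_target_classes, source_probs.shape[1]))
    for theta, y in zip(source_probs, target_labels):
        joint[y] += theta
    joint /= n

    # Conditional P(y | z) = P(y, z) / P(z).
    cond = joint / joint.sum(axis=0, keepdims=True)

    # LEEP = mean log-likelihood of the true target labels under the
    # dummy classifier  p(y | x) = sum_z P(y | z) * theta(x)_z.
    likelihoods = (source_probs @ cond.T)[np.arange(n), target_labels]
    return np.log(likelihoods).mean()
```

LEEP is always non-positive (it is an average log-probability), and a value closer to 0 suggests the source representation transfers better to the target task.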

Cuong V. Nguyen, Tal Hassner, Matthias Seeger, Cedric Archambeau • 2020

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Model Selection | DTD | Weighted Kendall's Tau: 0.486 | 46 |
| Model Selection | Cars | Weighted Kendall's Tau: 0.704 | 36 |
| Model Selection | Pets | Weighted Kendall's Tau: 0.68 | 36 |
| Model Selection | CIFAR100 | Weighted Kendall's Tau: 0.62 | 36 |
| Model Selection | CIFAR10 | Weighted Kendall's Tau: 0.601 | 36 |
| Model Selection | SUN397 | Weighted Kendall's Tau: 0.509 | 36 |
| Model Selection | Caltech | Weighted Kendall's Tau: 0.605 | 24 |
| Transferability Estimation | Checkpoints (ResNet-101) evaluated on downstream tasks (Caltech101, Flower102, Patch-Camelyon, Sun397), Group IV | Recall@1: 25 | 22 |
| Image Classification | CIFAR100 original (test) | -- | 20 |
| PTM Selection | Aircraft | Kendall's tau_w: 0.244 | 19 |
Showing 10 of 58 rows
