
Comparing Transfer and Meta Learning Approaches on a Unified Few-Shot Classification Benchmark

About

Meta and transfer learning are two successful families of approaches to few-shot learning. Despite highly related goals, state-of-the-art advances in each family are measured largely in isolation of each other. As a result of diverging evaluation norms, a direct or thorough comparison of different approaches is challenging. To bridge this gap, we perform a cross-family study of the best transfer and meta learners on both a large-scale meta-learning benchmark (Meta-Dataset, MD), and a transfer learning benchmark (Visual Task Adaptation Benchmark, VTAB). We find that, on average, large-scale transfer methods (Big Transfer, BiT) outperform competing approaches on MD, even when trained only on ImageNet. In contrast, meta-learning approaches struggle to compete on VTAB when trained and validated on MD. However, BiT is not without limitations, and pushing for scale does not improve performance on highly out-of-distribution MD tasks. In performing this study, we reveal a number of discrepancies in evaluation norms and study some of these in light of the performance gap. We hope that this work facilitates sharing of insights from each community, and accelerates progress on few-shot learning.
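To make the comparison concrete, the transfer-learning recipe evaluated in studies like this one typically freezes a large pretrained backbone and fits only a lightweight classifier on each few-shot task's support set, then scores it on the query set. The sketch below illustrates that episodic evaluation loop. It is a minimal illustration, not the paper's actual pipeline: the `backbone` callable, the plain-numpy softmax head, and all hyperparameters are assumptions for demonstration.

```python
import numpy as np

def fit_linear_head(features, labels, n_classes, lr=0.5, steps=200):
    """Fit a softmax linear classifier on support-set features via
    full-batch gradient descent (illustrative stand-in for the
    fine-tuned head used by transfer-learning baselines)."""
    d = features.shape[1]
    W = np.zeros((d, n_classes))
    b = np.zeros(n_classes)
    Y = np.eye(n_classes)[labels]  # one-hot targets
    for _ in range(steps):
        logits = features @ W + b
        logits -= logits.max(axis=1, keepdims=True)  # numerical stability
        p = np.exp(logits)
        p /= p.sum(axis=1, keepdims=True)
        grad = (p - Y) / len(labels)  # softmax cross-entropy gradient
        W -= lr * features.T @ grad
        b -= lr * grad.sum(axis=0)
    return W, b

def evaluate_episode(backbone, support_x, support_y, query_x, query_y, n_classes):
    """Transfer-style few-shot evaluation on one episode:
    frozen backbone features + linear head fit on the support set,
    accuracy measured on the query set."""
    fs = backbone(support_x)
    fq = backbone(query_x)
    W, b = fit_linear_head(fs, support_y, n_classes)
    preds = (fq @ W + b).argmax(axis=1)
    return float((preds == query_y).mean())
```

A meta-learner would instead be trained across many such episodes so that adaptation to a new support set is fast by construction; the benchmark comparison in this paper pits these two recipes against each other on the same episodes.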

Vincent Dumoulin, Neil Houlsby, Utku Evci, Xiaohua Zhai, Ross Goroshin, Sylvain Gelly, Hugo Larochelle • 2021

Related benchmarks

Task | Dataset | Metric | Result | Rank
Teachable Object Recognition | ORBIT CLU-VE 1.0 (test) | Frame Accuracy | 65.6 | 21
Teachable Object Recognition | ORBIT CLE-VE 1.0 (test) | Frame Accuracy | 81.4 | 21
Image Classification | VTAB (Visual Task Adaptation Benchmark) (test) | Avg Accuracy | 64.3 | 13
Image Classification | Meta-Dataset v2 (test) | Omniglot Accuracy | 90.9 | 8
Few-shot Image Classification | Meta-Dataset v2 (test) | Accuracy | 71.3 | 5
