
Self-training for Few-shot Transfer Across Extreme Task Differences

About

Most few-shot learning techniques are pre-trained on a large, labeled "base dataset". In problem domains where such large labeled datasets are not available for pre-training (e.g., X-ray, satellite images), one must resort to pre-training in a different "source" problem domain (e.g., ImageNet), which can be very different from the desired target task. Traditional few-shot and transfer learning techniques fail in the presence of such extreme differences between the source and target tasks. In this paper, we present a simple and effective solution to tackle this extreme domain gap: self-training a source domain representation on unlabeled data from the target domain. We show that this improves one-shot performance on the target domain by 2.9 points on average on the challenging BSCD-FSL benchmark consisting of datasets from multiple domains. Our code is available at https://github.com/cpphoo/STARTUP.
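The core idea above — distilling a source-pretrained "teacher" into a "student" using soft pseudo-labels on unlabeled target data — can be sketched as a combined loss. The following is a minimal numpy illustration, not the paper's implementation: the function names are invented for this sketch, and STARTUP's additional self-supervised (SimCLR-style) term is omitted.

```python
import numpy as np

def softmax(z, axis=-1):
    # numerically stable softmax over logits
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def cross_entropy(probs, labels):
    # mean negative log-likelihood of the true class
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))

def kl_divergence(p, q):
    # mean KL(p || q) over a batch of probability distributions
    return np.mean(np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=-1))

def startup_style_loss(student_src_logits, src_labels,
                       teacher_tgt_logits, student_tgt_logits):
    """Supervised cross-entropy on the labeled source batch, plus a
    distillation term pushing the student's predictions on unlabeled
    target images toward the frozen teacher's soft pseudo-labels."""
    ce = cross_entropy(softmax(student_src_logits), src_labels)
    kl = kl_divergence(softmax(teacher_tgt_logits), softmax(student_tgt_logits))
    return ce + kl
```

When the student's target-domain predictions match the teacher's exactly, the distillation term vanishes and only the supervised source loss remains; training trades off fitting the source labels against staying consistent with the teacher's pseudo-labels on the target domain.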

Cheng Perng Phoo, Bharath Hariharan · 2020

Related benchmarks

Task | Dataset | Result | Rank
Few-shot Image Classification | ISIC (test) | -- | 36
Cross-domain few-shot classification | CD-FSL benchmark | Mean Accuracy: 36.91 | 33
Cross-domain few-shot classification | CropDisease | Accuracy: 98.45 | 27
Cross-domain few-shot classification | ISIC | Accuracy: 64.16 | 27
Cross-domain few-shot classification | EuroSAT | Accuracy: 0.9199 | 27
Action Recognition | CDFSAR (HMDB, SSV2, Diving, UCF, RareAct) | HMDB Accuracy: 44.71 | 22
Few-shot Image Classification | EuroSAT (test) | 1-Shot Accuracy: 63.88 | 18
Cross-Domain Few-Shot Action Recognition | CDFSAR (HMDB, SSV2, Diving, UCF, RareAct) (test) | Accuracy (HMDB): 30.48 | 14
