
Pareto Self-Supervised Training for Few-Shot Learning

About

While few-shot learning (FSL) aims to generalize rapidly to new concepts with little supervision, self-supervised learning (SSL) constructs supervisory signals directly from unlabeled data. Exploiting the complementarity of these two paradigms, few-shot auxiliary learning has recently drawn much attention as a way to cope with scarce labeled data. Previous works benefit from sharing inductive bias between the main task (FSL) and auxiliary tasks (SSL), where the shared parameters are optimized by minimizing a linear combination of task losses. However, selecting proper weights to balance the tasks and reduce task conflict is challenging. To handle the problem as a whole, we propose a novel approach named Pareto Self-Supervised Training (PSST) for FSL. PSST explicitly decomposes the few-shot auxiliary problem into multiple constrained multi-objective subproblems with different trade-off preferences, and identifies a preference region in which the main task achieves the best performance. An effective preferred Pareto exploration is then proposed to find a set of optimal solutions in that region. Extensive experiments on several public benchmark datasets validate the effectiveness of our approach, which achieves state-of-the-art performance.
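The core idea of decomposing a multi-task trade-off into scalarized subproblems can be illustrated with a toy sketch. This is not the paper's implementation: the two quadratic objectives below are hypothetical stand-ins for the FSL main loss and the SSL auxiliary loss, and each preference weight defines one subproblem whose minimizer is one point on the Pareto front.

```python
# Toy sketch (assumed objectives, not the paper's method):
#   f1(x) = (x - 1)^2  -- stand-in for the main-task (FSL) loss
#   f2(x) = (x + 1)^2  -- stand-in for the auxiliary (SSL) loss
# Their minimizers conflict, so no single x minimizes both.

def grad_f1(x):
    return 2.0 * (x - 1.0)

def grad_f2(x):
    return 2.0 * (x + 1.0)

def solve_subproblem(w, lr=0.1, steps=200):
    """Minimize the scalarized loss w*f1 + (1-w)*f2 by gradient descent.

    Each preference weight w in [0, 1] defines one subproblem; for these
    quadratics the minimizer is analytically x* = 2w - 1, i.e. a single
    point on the Pareto front between the two objectives.
    """
    x = 0.0
    for _ in range(steps):
        x -= lr * (w * grad_f1(x) + (1.0 - w) * grad_f2(x))
    return x

# Sweeping preferences traces out a set of Pareto-optimal solutions.
# An approach like PSST would instead restrict the search to the
# preference region where the main task performs best (here, w near 1).
preferences = [0.25, 0.5, 0.75]
solutions = [solve_subproblem(w) for w in preferences]
```

The sketch shows why a single fixed weight is restrictive: each choice of `w` commits to one trade-off, whereas exploring a preference region yields a family of solutions from which the best one for the main task can be selected.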

Zhengyu Chen, Jixie Ge, Heshen Zhan, Siteng Huang, Donglin Wang • 2021

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Few-shot Image Classification | Mini-Imagenet (test) | -- | -- | 235 |
| 5-way Few-shot Classification | MiniImagenet | Accuracy (5-shot) | 80.64 | 150 |
| Few-shot Image Classification | miniImageNet (test) | -- | -- | 111 |
| Few-shot Image Classification | CIFAR-FS 5-way (test) | Top-1 Acc (1-shot) | 77.02 | 18 |
| Image Classification | CIFAR-FS 5-shot | Accuracy | 88.45 | 17 |
| Image Classification | CIFAR-FS 1-shot | Accuracy | 77.02 | 11 |
