
Self-supervised Label Augmentation via Input Transformations

About

Self-supervised learning, which constructs artificial labels from the input signals alone, has recently gained considerable attention for learning representations from unlabeled datasets, i.e., learning without any human-annotated supervision. In this paper, we show that such a technique can be used to significantly improve model accuracy even on fully-labeled datasets. Our scheme trains the model to learn both the original and self-supervised tasks, but differs from conventional multi-task learning frameworks that optimize the sum of the corresponding losses. Our main idea is to learn a single unified task with respect to the joint distribution of the original and self-supervised labels, i.e., we augment the original labels via self-supervision of input transformations. This simple yet effective approach makes models easier to train by relaxing a certain invariance constraint when learning the original and self-supervised tasks simultaneously. It also enables aggregated inference, which combines the predictions from different augmentations to improve prediction accuracy. Furthermore, we propose a novel knowledge-transfer technique, which we refer to as self-distillation, that achieves the effect of aggregated inference in a single (faster) forward pass. We demonstrate large accuracy improvements and the wide applicability of our framework in various fully-supervised settings, e.g., few-shot and imbalanced classification scenarios.
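The abstract's core idea can be sketched concretely. Assuming rotation is the chosen input transformation (0°, 90°, 180°, 270°, as in common rotation-prediction self-supervision), the N original classes are expanded into N×4 joint classes, and aggregated inference averages the per-class scores recovered from each rotated copy of the input. The function names and the 10-class/4-rotation setup below are illustrative, not the paper's exact implementation:

```python
import numpy as np

N_CLASSES = 10    # original classes (e.g., CIFAR-10)
N_TRANSFORMS = 4  # assumed transformations: rotations by 0, 90, 180, 270 degrees


def joint_label(y, t):
    """Augmented label in the joint (class, transform) space of N*M labels.

    y: original class index in [0, N_CLASSES), t: transform index in [0, N_TRANSFORMS).
    """
    return y * N_TRANSFORMS + t


def aggregated_inference(joint_logits):
    """Combine predictions from all transformed copies of one input.

    joint_logits: array of shape (N_TRANSFORMS, N_CLASSES * N_TRANSFORMS),
    row t holding the joint logits produced from the input under transform t.
    For that row, only the joint entries (y, t) concern the applied transform;
    we softmax those per-class scores and average the distributions over t.
    """
    per_transform = []
    for t in range(N_TRANSFORMS):
        # Pick the scores for joint labels (y, t), i.e., column t after reshaping.
        scores = joint_logits[t].reshape(N_CLASSES, N_TRANSFORMS)[:, t]
        exp = np.exp(scores - scores.max())  # numerically stable softmax
        per_transform.append(exp / exp.sum())
    return np.mean(per_transform, axis=0)  # shape (N_CLASSES,), sums to 1


# Example: class 3 seen under the 180-degree rotation maps to joint label 14.
print(joint_label(3, 2))
```

Training then reduces to ordinary cross-entropy over the 40 joint labels; no separate self-supervision loss weight is needed, which is the "single unified task" the abstract describes.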

Hankook Lee, Sung Ju Hwang, Jinwoo Shin • 2019

Related benchmarks

Task                           Dataset                                                  Metric                    Result  Rank
Image Classification           CIFAR-10 Long-tailed (val)                               Top-1 Acc                 89.58   82
Image Classification           CIFAR-100 Long-tailed (val)                              Top-1 Accuracy (Overall)  59.89   82
5-way Image Classification     Mini-ImageNet (test)                                     Top-1 Acc                 79.63   46
Image Classification           ImageNet                                                 Top-1 Acc                 76.17   33
5-way 1-shot Classification    Mini-ImageNet                                            Top-1 Accuracy (ACC_1)    62.93   31
Out-of-Distribution Detection  CIFAR-10 (in-dist) vs SVHN (out-dist), standard (test)   AUROC                     89.1    31
Out-of-Distribution Detection  CIFAR-10 (in-dist) vs LSUN (out-dist)                    AUROC                     90.7    28
Out-of-Distribution Detection  CIFAR-10 (in-dist) vs ImageNet (out-dist)                AUROC                     0.898   28
5-way 5-shot Classification    Mini-ImageNet                                            Mean Accuracy             79.63   27
Out-of-Distribution Detection  CIFAR-10 (in-dist) vs CIFAR-100 (out-dist)               AUROC                     0.836   10

(10 of 11 rows shown)
