
Multi-task Self-Supervised Visual Learning

About

We investigate methods for combining multiple self-supervised tasks (i.e., supervised tasks where data can be collected without manual labeling) in order to train a single visual representation. First, we provide an apples-to-apples comparison of four different self-supervised tasks using the very deep ResNet-101 architecture. We then combine tasks to jointly train a network. We also explore lasso regularization to encourage the network to factorize the information in its representation, and methods for "harmonizing" network inputs in order to learn a more unified representation. We evaluate all methods on ImageNet classification, PASCAL VOC detection, and NYU depth prediction. Our results show that deeper networks work better, and that combining tasks, even via a naive multi-head architecture, always improves performance. Our best joint network nearly matches the PASCAL performance of a model pre-trained on ImageNet classification, and matches the ImageNet network on NYU depth prediction.
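As a rough illustration of the ideas in the abstract, the sketch below shows what a naive multi-head setup with a lasso penalty could look like: a shared trunk feeds one small head per self-supervised task, and an L1 term encourages each head to rely on a sparse subset of the shared features. All names, dimensions, and the single-linear-layer "trunk" are hypothetical stand-ins, not the paper's actual ResNet-101 implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions, chosen only for illustration.
D_IN, D_FEAT = 32, 16

# Shared trunk: one linear + ReLU layer standing in for a deep backbone.
W_trunk = rng.normal(scale=0.1, size=(D_IN, D_FEAT))

def trunk(x):
    """Shared representation consumed by every task head."""
    return np.maximum(x @ W_trunk, 0.0)

# Naive multi-head architecture: one lightweight head per task
# (task names and output sizes are made up for this sketch).
heads = {
    "task_a": rng.normal(scale=0.1, size=(D_FEAT, 4)),
    "task_b": rng.normal(scale=0.1, size=(D_FEAT, 2)),
}

def forward(x):
    """Run the shared trunk once, then every task head on its output."""
    z = trunk(x)
    return {name: z @ W for name, W in heads.items()}

def lasso_penalty(weight, lam=1e-3):
    """L1 penalty nudging a head toward a sparse subset of trunk features."""
    return lam * np.abs(weight).sum()

# Example: a batch of 8 inputs produces one output per task,
# plus a regularization term summed over all heads.
x = rng.normal(size=(8, D_IN))
outputs = forward(x)
reg = sum(lasso_penalty(W) for W in heads.values())
```

The key property being sketched is that the trunk is evaluated once per input while each task only adds a cheap head, which is why combining tasks in this naive way is inexpensive.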

Carl Doersch, Andrew Zisserman · 2017

Related benchmarks

Task                 | Dataset                | Result                              | Rank
---------------------|------------------------|-------------------------------------|-----
Object Detection     | COCO 2017 (val)        | AP: 32.7                            | 2454
Image Classification | ImageNet-1k (val)      | Top-1 Accuracy: 31.5                | 1453
Image Classification | ImageNet (val)         | --                                  | 1206
Object Detection     | PASCAL VOC 2007 (test) | mAP: 70.5                           | 821
Depth Estimation     | NYU v2 (test)          | Threshold Accuracy (δ < 1.25): 79.3 | 423
Image Classification | ImageNet (val)         | --                                  | 354
Image Classification | ImageNet               | --                                  | 55
Image Classification | VTAB v2 (test)         | Mean Accuracy: 59.2                 | 39
Depth Prediction     | NYU Depth              | --                                  | 5
