Self-Supervised Models are Continual Learners

About

Self-supervised models have been shown to produce comparable or better visual representations than their supervised counterparts when trained offline on unlabeled data at scale. However, their efficacy is catastrophically reduced in a Continual Learning (CL) scenario where data is presented to the model sequentially. In this paper, we show that self-supervised loss functions can be seamlessly converted into distillation mechanisms for CL by adding a predictor network that maps the current state of the representations to their past state. This enables us to devise a framework for Continual self-supervised visual representation Learning that (i) significantly improves the quality of the learned representations, (ii) is compatible with several state-of-the-art self-supervised objectives, and (iii) needs little to no hyperparameter tuning. We demonstrate the effectiveness of our approach empirically by training six popular self-supervised models in various CL settings.
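To make the mechanism concrete, below is a minimal PyTorch sketch of the idea described in the abstract, under stated assumptions: a generic negative-cosine SSL objective stands in for any of the six supported methods, and the predictor is a two-layer MLP. All names here (ContinualSSL, ssl_loss, start_new_task) are illustrative, not the authors' reference implementation.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

def ssl_loss(z1, z2):
    # Stand-in for any self-supervised objective; here a negative
    # cosine similarity in the style of BYOL/SimSiam. The paper's
    # point is that the same loss can serve as the distillation term.
    return -F.cosine_similarity(z1, z2.detach(), dim=-1).mean()

class ContinualSSL(nn.Module):
    # Hypothetical wrapper illustrating the paper's distillation idea:
    # a frozen snapshot of the encoder from the previous task provides
    # the "past" representations, and a small predictor g maps current
    # features onto that past space.
    def __init__(self, encoder, feat_dim=2048, hidden_dim=512):
        super().__init__()
        self.encoder = encoder
        # Predictor g: maps the current state of the representations
        # to their past state.
        self.predictor = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim),
            nn.ReLU(inplace=True),
            nn.Linear(hidden_dim, feat_dim),
        )
        self.frozen_encoder = None  # set at each task boundary

    def start_new_task(self):
        # Snapshot and freeze the encoder when a new task begins.
        self.frozen_encoder = copy.deepcopy(self.encoder)
        for p in self.frozen_encoder.parameters():
            p.requires_grad = False

    def training_loss(self, x1, x2):
        # x1, x2: two augmented views of the same batch.
        z1, z2 = self.encoder(x1), self.encoder(x2)
        loss = ssl_loss(z1, z2) + ssl_loss(z2, z1)  # current SSL loss
        if self.frozen_encoder is not None:
            with torch.no_grad():
                past1 = self.frozen_encoder(x1)
            # Reuse the SSL loss as a distillation term between the
            # predicted current features and the frozen past features.
            loss = loss + ssl_loss(self.predictor(z1), past1)
        return loss
```

At each task boundary, `start_new_task` snapshots the encoder; during subsequent tasks the frozen copy supplies the "past" targets, so the distillation needs no replay buffer and no labels, which is why the approach requires little to no extra hyperparameter tuning.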

Enrico Fini, Victor G. Turrisi da Costa, Xavier Alameda-Pineda, Elisa Ricci, Karteek Alahari, Julien Mairal • 2021

Related benchmarks

Task                                Dataset                     Metric            Result  Rank
Medical Image Segmentation          LA                          Dice              89.66   97
Medical Image Segmentation          GLAS                        Dice              89.12   28
Domain-incremental learning         DomainNet (test)            Average Accuracy  50.9    25
Medical Image Segmentation          LiTS                        Dice              67.04   23
Medical Image Classification        NCH                         Accuracy          95.01   14
Medical Image Analysis Aggregation  Nine Medical Tasks Average  Average Score     86.88   14
Medical Image Classification        ChestXR                     Accuracy          91.66   14
Medical Image Classification        PubMed20k                   Accuracy          82.93   14
Medical Image Classification        RICORD                      Accuracy          76.59   14
Medical Image Segmentation          VS                          Dice              87.94   14
