Using Self-Supervised Learning Can Improve Model Robustness and Uncertainty

About

Self-supervision provides effective representations for downstream tasks without requiring labels. However, existing approaches lag behind fully supervised training and are often not thought beneficial beyond obviating or reducing the need for annotations. We find that self-supervision can benefit robustness in a variety of ways, including robustness to adversarial examples, label corruption, and common input corruptions. Additionally, self-supervision greatly benefits out-of-distribution detection on difficult, near-distribution outliers, so much so that it exceeds the performance of fully supervised methods. These results demonstrate the promise of self-supervision for improving robustness and uncertainty estimation and establish these tasks as new axes of evaluation for future self-supervised learning research.

Dan Hendrycks, Mantas Mazeika, Saurav Kadavath, Dawn Song • 2019
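
The self-supervised auxiliary task behind these results is rotation prediction: each training image is rotated by 0°, 90°, 180°, or 270°, and an extra head is trained to predict which rotation was applied, alongside the usual classifier. Below is a minimal PyTorch sketch of that combined objective; the module layout, the names (`RotNet`, `rot_head`), and the weight `lambda_rot` are illustrative assumptions, not the authors' released code.

```python
# Minimal sketch of a supervised loss plus an auxiliary rotation-prediction
# loss, in the spirit of the paper. Names and the value of `lambda_rot`
# are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

def rotate_batch(x):
    """Return the batch rotated by 0/90/180/270 degrees, with rotation labels."""
    rotations = [torch.rot90(x, k, dims=(2, 3)) for k in range(4)]  # NCHW
    labels = torch.arange(4).repeat_interleave(x.size(0))           # 0..3 per copy
    return torch.cat(rotations, dim=0), labels

class RotNet(nn.Module):
    """Shared backbone with a class head and a 4-way rotation head (assumed layout)."""
    def __init__(self, backbone, feat_dim, num_classes):
        super().__init__()
        self.backbone = backbone              # e.g. a Wide ResNet feature extractor
        self.class_head = nn.Linear(feat_dim, num_classes)
        self.rot_head = nn.Linear(feat_dim, 4)

    def forward(self, x):
        z = self.backbone(x)
        return self.class_head(z), self.rot_head(z)

def total_loss(model, x, y, lambda_rot=0.5):
    # Supervised cross-entropy on the unrotated images.
    logits, _ = model(x)
    loss_sup = F.cross_entropy(logits, y)
    # Self-supervised cross-entropy: predict which rotation was applied.
    x_rot, rot_labels = rotate_batch(x)
    _, rot_logits = model(x_rot)
    loss_rot = F.cross_entropy(rot_logits, rot_labels.to(x.device))
    return loss_sup + lambda_rot * loss_rot
```

Because the rotation head shares the backbone with the classifier, the auxiliary loss acts as a regularizer: the same sketch applies whether the goal is robustness to corruptions, label noise, or scoring outliers at test time.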

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Image Classification | CIFAR-100 (test) | - | 3518 |
| Image Classification | CIFAR-10 (test) | - | 3381 |
| Boundary Detection | BSDS 500 (test) | ODS 67.7 | 185 |
| Anomaly Detection | CIFAR-10 | AUC 95.6 | 120 |
| Out-of-Distribution Detection | CIFAR-100 | AUROC 82.3 | 107 |
| Out-of-Distribution Detection | CIFAR-10 | AUROC 99 | 105 |
| Out-of-Distribution Detection | CIFAR-10 vs SVHN (test) | AUROC 0.989 | 101 |
| Out-of-Distribution Detection | CIFAR-10 vs CIFAR-100 (test) | AUROC 90.9 | 93 |
| Anomaly Detection | WBC | ROCAUC 0.605 | 87 |
| Out-of-Distribution Detection | CIFAR-10 in-distribution, LSUN out-of-distribution (test) | AUROC 93.2 | 73 |
Showing 10 of 78 rows
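
For context, the AUROC numbers above measure how well a method's per-example anomaly score separates in-distribution test examples from outliers (50% is chance, 100% is perfect). A minimal sketch of the computation, assuming scores are already available; the score function itself (e.g. the rotation-prediction loss or the negative maximum softmax probability) is where methods differ:

```python
# Minimal sketch of OOD-detection AUROC, assuming anomaly scores where
# higher means "more likely out-of-distribution". The synthetic scores
# below are placeholders, not results from the paper.
import numpy as np
from sklearn.metrics import roc_auc_score

def ood_auroc(scores_in, scores_out):
    """AUROC for separating in-distribution (label 0) from OOD (label 1)."""
    labels = np.concatenate([np.zeros(len(scores_in)), np.ones(len(scores_out))])
    scores = np.concatenate([scores_in, scores_out])
    return roc_auc_score(labels, scores)

# Example with synthetic scores: OOD examples tend to score higher.
rng = np.random.default_rng(0)
print(ood_auroc(rng.normal(0.0, 1.0, 1000), rng.normal(2.0, 1.0, 1000)))
```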

Other info

Code