
Adversarial Contrastive Self-Supervised Learning

About

Recently, learning from vast amounts of unlabeled data, especially via self-supervised learning, has emerged and attracted widespread attention. Self-supervised pre-training followed by supervised fine-tuning on a few labeled examples can significantly improve label efficiency and even outperform standard supervised training on fully annotated data. In this work, we present a novel self-supervised deep learning paradigm based on online hard negative pair mining. Specifically, we design a student-teacher network to generate multiple views of the data for self-supervised learning and integrate hard negative pair mining into the training. We then derive a new triplet-like loss that considers both positive sample pairs and mined hard negative sample pairs. Extensive experiments on ILSVRC-2012 demonstrate the effectiveness of the proposed method and its components.
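The abstract combines two ingredients: online hard negative mining (pick the negative most similar to the anchor within the batch) and a triplet-like loss over the positive pair and the mined negative. The sketch below illustrates that combination in plain NumPy; the function name, margin value, and hinge form are assumptions for illustration, not the paper's actual loss.

```python
import numpy as np

def hard_negative_triplet_loss(anchor, positive, candidates, margin=0.5):
    """Triplet-like loss with online hard negative mining (illustrative sketch).

    anchor, positive: (d,) L2-normalized embeddings of two views of one sample
                      (e.g. student and teacher outputs).
    candidates: (n, d) L2-normalized embeddings of other samples in the batch.
    margin: hinge margin on the similarity gap (hypothetical value).
    """
    sims = candidates @ anchor              # cosine similarity to each candidate
    hard_neg = candidates[np.argmax(sims)]  # online mining: most similar negative
    pos_sim = float(anchor @ positive)
    neg_sim = float(anchor @ hard_neg)
    # Hinge: penalize when the hard negative is within `margin` of the positive.
    return max(0.0, margin - pos_sim + neg_sim)

# Toy usage with unit vectors:
anchor = np.array([1.0, 0.0])
positive = np.array([1.0, 0.0])
candidates = np.array([[0.0, 1.0], [0.6, 0.8]])
loss = hard_negative_triplet_loss(anchor, positive, candidates)
print(loss)  # mined negative is [0.6, 0.8]; loss = max(0, 0.5 - 1.0 + 0.6) = 0.1
```

In practice such a loss would be computed batch-wise over embeddings from the student and teacher branches, with gradients flowing only through the student.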

Wentao Zhu, Hang Shang, Tingxun Lv, Chao Liao, Sen Yang, Ji Liu • 2022

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Robust Image Classification | CIFAR-10 | Clean Accuracy | 78.14 | 68 |
| Black-box Adversarial Robustness | CIFAR-10 | Accuracy | 70.4 | 6 |
| Adversarial Transfer Learning | CIFAR-10 (test) | Clean Accuracy | 73.93 | 4 |
| Image Classification | CIFAR-100 (linear evaluation) | Clean Accuracy | 45.99 | 4 |
| Image Classification | CIFAR-10 | PGD Robust Accuracy | 42.89 | 4 |
