Adversarial Contrastive Self-Supervised Learning
About
Recently, learning from vast amounts of unlabeled data, especially via self-supervised learning, has emerged and attracted widespread attention. Self-supervised pre-training followed by supervised fine-tuning on a few labeled examples can significantly improve label efficiency and outperform standard supervised training on fully annotated data. In this work, we present a novel self-supervised deep learning paradigm based on online hard negative pair mining. Specifically, we design a student-teacher network that generates multiple views of the data for self-supervised learning, and we integrate hard negative pair mining into training. We then derive a new triplet-like loss that considers both positive sample pairs and mined hard negative sample pairs. Extensive experiments on ILSVRC-2012 demonstrate the effectiveness of the proposed method and its components.
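The core idea — a triplet-like loss where, for each anchor, the hardest negative is mined online from the rest of the batch — can be sketched as follows. This is a minimal NumPy illustration under assumed conventions (cosine similarity, a hinge margin, in-batch negatives), not the paper's exact formulation; the function name and `margin` parameter are hypothetical.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    """Normalize rows to unit length so dot products are cosine similarities."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def hard_negative_triplet_loss(anchors, positives, margin=0.5):
    """Triplet-like loss with online hardest-negative mining (illustrative sketch).

    For each anchor i, positives[i] is an embedding of another view of the
    same image; the hardest negative is the most similar embedding among the
    other anchors in the batch.
    """
    a = l2_normalize(anchors)
    p = l2_normalize(positives)
    sim = a @ a.T                      # anchor-anchor cosine similarities
    np.fill_diagonal(sim, -np.inf)     # exclude self-pairs from mining
    hard_neg_sim = sim.max(axis=1)     # hardest (most similar) negative per anchor
    pos_sim = np.sum(a * p, axis=1)    # similarity of each positive pair
    # Hinge: push positive similarity above hardest-negative similarity by `margin`
    losses = np.maximum(0.0, margin + hard_neg_sim - pos_sim)
    return losses.mean()
```

In practice the anchor and positive views would come from the student and teacher branches respectively, and the loss would be computed on GPU tensors rather than NumPy arrays.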
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Robust Image Classification | CIFAR-10 | Clean Accuracy: 78.14 | 68 |
| Black-box Adversarial Robustness | CIFAR-10 | Accuracy: 70.4 | 6 |
| Adversarial Transfer Learning | CIFAR-10 (test) | Clean Accuracy: 73.93 | 4 |
| Image Classification | CIFAR-100 (linear evaluation) | Clean Accuracy: 45.99 | 4 |
| Image Classification | CIFAR-10 | PGD Robust Accuracy: 42.89 | 4 |