
Self-supervised Pre-training with Hard Examples Improves Visual Representations

About

Self-supervised pre-training (SSP) employs random image transformations to generate training data for visual representation learning. In this paper, we first present a modeling framework that unifies existing SSP methods as learning to predict pseudo-labels. We then propose new data augmentation methods for generating training examples whose pseudo-labels are harder to predict than those generated via random image transformations. Specifically, we use adversarial training and CutMix to create hard examples (HEXA) to serve as augmented views for MoCo-v2 and DeepCluster-v2, leading to two variants, HEXA_{MoCo} and HEXA_{DCluster}, respectively. In our experiments, we pre-train models on ImageNet and evaluate them on multiple public benchmarks. Our evaluation shows that the two new algorithm variants outperform their original counterparts and achieve new state-of-the-art results on a wide range of tasks where limited task supervision is available for fine-tuning. These results verify that hard examples are instrumental in improving the generalization of the pre-trained models.
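To make the CutMix-based hard-example idea concrete, here is a minimal sketch of the standard CutMix operation on two image views: a random rectangle from one image is pasted into the other, and the mixing ratio is returned for combining the pseudo-labels. The function name `cutmix` and its parameters are illustrative; this follows the generic CutMix recipe, not the authors' released implementation, and its exact integration with MoCo-v2 / DeepCluster-v2 views is an assumption.

```python
import numpy as np

def cutmix(img_a, img_b, alpha=1.0, rng=None):
    """Paste a random rectangle from img_b into img_a.

    Returns (mixed_image, lam), where lam is the fraction of img_a
    that remains; lam is typically used to mix the two pseudo-labels.
    """
    rng = rng or np.random.default_rng()
    h, w = img_a.shape[:2]
    lam = rng.beta(alpha, alpha)            # mixing ratio ~ Beta(alpha, alpha)
    cut_h = int(h * np.sqrt(1.0 - lam))     # patch size so its area ~ (1 - lam)
    cut_w = int(w * np.sqrt(1.0 - lam))
    cy, cx = rng.integers(0, h), rng.integers(0, w)   # random patch center
    y1, y2 = max(cy - cut_h // 2, 0), min(cy + cut_h // 2, h)
    x1, x2 = max(cx - cut_w // 2, 0), min(cx + cut_w // 2, w)
    mixed = img_a.copy()
    mixed[y1:y2, x1:x2] = img_b[y1:y2, x1:x2]
    # Recompute lam from the actual pasted area (clipping may shrink it).
    lam = 1.0 - ((y2 - y1) * (x2 - x1)) / (h * w)
    return mixed, lam
```

In a pseudo-label framework, the target for the mixed view would be weighted as `lam * label_a + (1 - lam) * label_b`, making the prediction task harder than for a single randomly transformed view.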

Chunyuan Li, Xiujun Li, Lei Zhang, Baolin Peng, Mingyuan Zhou, Jianfeng Gao • 2020

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Image Classification | CIFAR-100 (test) | – | – | 3518 |
| Image Classification | ImageNet-1k (val) | Top-1 Accuracy | 75.5 | 1453 |
| Image Classification | CIFAR-10 (test) | – | – | 906 |
| Image Classification | VOC 2007 (test) | mAP | 88.8 | 67 |
| Image Classification | ImageNet 1% labels 1.0 (val) | Top-1 Accuracy | 0.573 | 33 |
| Image Classification | ImageNet 10% labels 1.0 (val) | Top-1 Accuracy | 71.8 | 30 |
| Image Classification | ImageNet 100% labels 1.0 (val) | Top-1 Accuracy | 78.6 | 17 |
