Momentum Contrast for Unsupervised Visual Representation Learning

About

We present Momentum Contrast (MoCo) for unsupervised visual representation learning. From a perspective on contrastive learning as dictionary look-up, we build a dynamic dictionary with a queue and a moving-averaged encoder. This enables building a large and consistent dictionary on-the-fly that facilitates contrastive unsupervised learning. MoCo provides competitive results under the common linear protocol on ImageNet classification. More importantly, the representations learned by MoCo transfer well to downstream tasks. MoCo can outperform its supervised pre-training counterpart in 7 detection/segmentation tasks on PASCAL VOC, COCO, and other datasets, sometimes surpassing it by large margins. This suggests that the gap between unsupervised and supervised representation learning has been largely closed in many vision tasks.

Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, Ross Girshick • 2019
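
As a concrete illustration of the dictionary look-up described above, here is a minimal PyTorch sketch of one MoCo training step, loosely in the spirit of the pseudocode in the paper. The toy linear encoders, the random input batch, and the hyperparameter values (feature dimension, queue size K, momentum m, temperature T) are illustrative placeholders, not the authors' released implementation.

# Minimal MoCo-style training step (illustrative sketch, not the official code)
import torch
import torch.nn as nn
import torch.nn.functional as F

dim, K, m, T = 128, 4096, 0.999, 0.07     # feature dim, queue size, momentum, temperature (toy values)

# toy encoders; the paper uses ResNet-50 backbones
f_q = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, dim))
f_k = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, dim))
f_k.load_state_dict(f_q.state_dict())      # key encoder starts as a copy of the query encoder
for p in f_k.parameters():
    p.requires_grad = False                # keys receive no gradient

queue = F.normalize(torch.randn(K, dim), dim=1)   # dictionary of negative keys
optimizer = torch.optim.SGD(f_q.parameters(), lr=0.03)

def moco_step(x_q, x_k):
    """One training step; x_q and x_k are two augmented views of the same images."""
    global queue
    q = F.normalize(f_q(x_q), dim=1)       # queries: N x dim
    with torch.no_grad():
        # momentum update of the key encoder: theta_k <- m * theta_k + (1 - m) * theta_q
        for p_k, p_q in zip(f_k.parameters(), f_q.parameters()):
            p_k.mul_(m).add_(p_q.detach(), alpha=1 - m)
        k = F.normalize(f_k(x_k), dim=1)   # keys: N x dim

    l_pos = (q * k).sum(dim=1, keepdim=True)   # positive logits: N x 1
    l_neg = q @ queue.t()                      # negative logits against the queue: N x K
    logits = torch.cat([l_pos, l_neg], dim=1) / T
    labels = torch.zeros(logits.size(0), dtype=torch.long)   # the positive key is index 0
    loss = F.cross_entropy(logits, labels)     # InfoNCE loss

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # enqueue the newest keys and dequeue the oldest (FIFO dictionary)
    queue = torch.cat([k, queue], dim=0)[:K]
    return loss.item()

# usage: two noisy "augmented" views of a toy batch
x = torch.randn(8, 3, 32, 32)
loss = moco_step(x + 0.1 * torch.randn_like(x), x + 0.1 * torch.randn_like(x))

The momentum update keeps the key encoder a slowly moving average of the query encoder, so keys accumulated in the queue over many iterations remain consistent with one another, while the queue decouples the dictionary size from the mini-batch size.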

Related benchmarks

Task                      | Dataset                | Metric          | Result | Rank
Image Classification     | CIFAR-100 (test)       | Accuracy        | 56.1   | 3518
Image Classification     | CIFAR-10 (test)        | Accuracy        | 86.7   | 3381
Semantic Segmentation    | ADE20K (val)           | mIoU            | 36.7   | 2888
Object Detection         | COCO 2017 (val)        | AP              | 39.1   | 2643
Semantic Segmentation    | PASCAL VOC 2012 (val)  | Mean IoU        | 77.32  | 2142
Image Classification     | ImageNet-1k (val)      | Top-1 Accuracy  | 75.9   | 1469
Semantic Segmentation    | PASCAL VOC 2012 (test) | mIoU            | 72.5   | 1415
Image Classification     | ImageNet (val)         | Top-1 Accuracy  | 68.6   | 1206
Instance Segmentation    | COCO 2017 (val)        | APm             | 0.351  | 1201
Video Object Segmentation| DAVIS 2017 (val)       | J mean          | 63.4   | 1193
Showing 10 of 313 rows

Other info

Code

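Official implementation (PyTorch): https://github.com/facebookresearch/moco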