
Self-Supervised Learning of Video-Induced Visual Invariances

About

We propose a general framework for self-supervised learning of transferable visual representations based on Video-Induced Visual Invariances (VIVI). We consider the implicit hierarchy present in the videos and make use of (i) frame-level invariances (e.g. stability to color and contrast perturbations), (ii) shot/clip-level invariances (e.g. robustness to changes in object orientation and lighting conditions), and (iii) video-level invariances (semantic relationships of scenes across shots/clips), to define a holistic self-supervised loss. Training models using different variants of the proposed framework on videos from the YouTube-8M (YT8M) data set, we obtain state-of-the-art self-supervised transfer learning results on the 19 diverse downstream tasks of the Visual Task Adaptation Benchmark (VTAB), using only 1000 labels per task. We then show how to co-train our models jointly with labeled images, outperforming an ImageNet-pretrained ResNet-50 by 0.8 points with 10x fewer labeled images, as well as the previous best supervised model by 3.7 points using the full ImageNet data set.
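The paper does not spell out its loss formulas in this abstract, but the three-level structure (frame, shot, video) can be illustrated with a minimal NumPy sketch. Everything below is an assumption for illustration: the function names, the InfoNCE-style contrastive term for frame-level invariance, the centroid-pull term for shot-level invariance, and the margin term for video-level relationships are plausible stand-ins, not the paper's actual losses.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + 1e-12)

def frame_level_loss(view_a, view_b, temperature=0.1):
    # Frame-level invariance (illustrative): InfoNCE between two augmented
    # views of the same frames; view_a, view_b have shape (n, d) and the
    # matching rows are the positive pairs (diagonal of the logits matrix).
    a, b = l2_normalize(view_a), l2_normalize(view_b)
    logits = a @ b.T / temperature
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.diag(log_prob).mean()

def shot_level_loss(frame_embs):
    # Shot-level invariance (illustrative): pull every frame embedding toward
    # its shot centroid; frame_embs has shape (num_shots, frames_per_shot, d).
    centroid = frame_embs.mean(axis=1, keepdims=True)
    return ((frame_embs - centroid) ** 2).sum(axis=-1).mean()

def video_level_loss(shot_embs, video_ids, margin=1.0):
    # Video-level relationship (illustrative): shots from the same video should
    # embed closer together than shots from different videos (simple margin).
    s = l2_normalize(shot_embs)
    dists = np.linalg.norm(s[:, None] - s[None, :], axis=-1)
    same = video_ids[:, None] == video_ids[None, :]
    pos = dists[same & ~np.eye(len(s), dtype=bool)].mean()
    neg = dists[~same].mean()
    return max(0.0, margin + pos - neg)

def vivi_loss(view_a, view_b, frame_embs, shot_embs, video_ids,
              w_frame=1.0, w_shot=1.0, w_video=1.0):
    # Holistic self-supervised loss: weighted sum over the three levels of
    # the video hierarchy. The weights are hypothetical hyperparameters.
    return (w_frame * frame_level_loss(view_a, view_b)
            + w_shot * shot_level_loss(frame_embs)
            + w_video * video_level_loss(shot_embs, video_ids))
```

In practice each term would operate on embeddings produced by a shared backbone; the sketch only shows how the hierarchy of invariances composes into a single scalar objective.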

Michael Tschannen, Josip Djolonga, Marvin Ritter, Aravindh Mahendran, Xiaohua Zhai, Neil Houlsby, Sylvain Gelly, Mario Lucic • 2019

Related benchmarks

Task                              | Dataset           | Metric        | Result | Rank
Semantic Segmentation             | ADE20K (val)      | mIoU          | 34.2   | 2731
Object Detection                  | COCO 2017 (val)   | AP            | 36.5   | 2454
Semantic Segmentation             | ADE20K            | mIoU          | 34.2   | 936
Object Detection                  | COCO (val)        | mAP           | 41.3   | 613
Object Detection                  | LVIS (val)        | mAP           | 23.2   | 141
Object Detection                  | COCO              | mAP           | 41.3   | 107
Image Classification              | VTAB v2 (test)    | Mean Accuracy | 70.4   | 39
Visual Task Adaptation            | VTAB-1k v1 (test) | Mean Accuracy | 71.7   | 29
Semantic Segmentation             | PASCAL (train)    | mIoU          | 65.8   | 11
Out-of-Distribution Recognition   | ImageNet A        | Top-1 Accuracy | 0.5   | 8

(Showing 10 of 12 rows.)
