
A Large-Scale Study on Unsupervised Spatiotemporal Representation Learning

About

We present a large-scale study on unsupervised spatiotemporal representation learning from videos. With a unified perspective on four recent image-based frameworks, we study a simple objective that can easily generalize all these methods to space-time. Our objective encourages temporally-persistent features in the same video, and in spite of its simplicity, it works surprisingly well across: (i) different unsupervised frameworks, (ii) pre-training datasets, (iii) downstream datasets, and (iv) backbone architectures. We draw a series of intriguing observations from this study, e.g., we discover that encouraging long-spanned persistency can be effective even if the timespan is 60 seconds. In addition to state-of-the-art results in multiple benchmarks, we report a few promising cases in which unsupervised pre-training can outperform its supervised counterpart. Code is made available at https://github.com/facebookresearch/SlowFast
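The core objective described above treats clips sampled from the same video as positives and pulls their embeddings together. As an illustration only (the paper studies this idea across several frameworks; the function name and the plain-NumPy setting here are our own assumptions, not the authors' implementation), an InfoNCE-style version of the temporal-persistency loss can be sketched as:

```python
import numpy as np

def temporal_persistency_loss(z1, z2, temperature=0.1):
    """Hypothetical sketch of an InfoNCE-style objective.

    z1[i] and z2[i] are embeddings of two clips drawn from the same
    video (positives); every other row in the batch serves as a
    negative. z1, z2: arrays of shape (N, D)."""
    # L2-normalize the clip embeddings so the dot product is cosine similarity
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    # Pairwise similarities between all clips in the batch, scaled by temperature
    logits = z1 @ z2.T / temperature                      # (N, N)
    # Row i's positive sits on the diagonal; apply softmax cross-entropy per row
    logits = logits - logits.max(axis=1, keepdims=True)   # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))
```

Minimizing this loss makes each clip's embedding most similar to the other clip from its own video, which is what "temporally-persistent features" means in practice; the same idea transfers to frameworks without explicit negatives (e.g. BYOL-style methods) by swapping the loss term.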

Christoph Feichtenhofer, Haoqi Fan, Bo Xiong, Ross Girshick, Kaiming He • 2021

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Action Recognition | Something-Something v2 (val) | Top-1 Accuracy | 55.8 | 535 |
| Action Recognition | UCF101 | Accuracy | 94.2 | 365 |
| Action Recognition | UCF101 (mean of 3 splits) | Accuracy | 96.3 | 357 |
| Action Recognition | Something-Something v2 (test) | -- | -- | 333 |
| Action Recognition | UCF101 (test) | Accuracy | 96.3 | 307 |
| Action Recognition | HMDB51 (test) | Accuracy | 72.1 | 249 |
| Action Recognition | Kinetics 400 (test) | -- | -- | 245 |
| Video Classification | Kinetics 400 (val) | -- | -- | 204 |
| Video Action Recognition | Kinetics-400 | Top-1 Accuracy | 71.5 | 184 |
| Video Classification | Something-Something v2 (test) | Top-1 Accuracy | 55.8 | 169 |

(10 of 35 benchmark rows shown.)
