A Large-Scale Study on Unsupervised Spatiotemporal Representation Learning
About
We present a large-scale study on unsupervised spatiotemporal representation learning from videos. With a unified perspective on four recent image-based frameworks, we study a simple objective that can easily generalize all these methods to space-time. Our objective encourages temporally-persistent features in the same video, and in spite of its simplicity, it works surprisingly well across: (i) different unsupervised frameworks, (ii) pre-training datasets, (iii) downstream datasets, and (iv) backbone architectures. We draw a series of intriguing observations from this study, e.g., we discover that encouraging long-spanned persistency can be effective even if the timespan is 60 seconds. In addition to state-of-the-art results on multiple benchmarks, we report a few promising cases in which unsupervised pre-training can outperform its supervised counterpart. Code is made available at https://github.com/facebookresearch/SlowFast.
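The core idea, learning temporally-persistent features by treating clips from the same video as positives, can be sketched with a contrastive (InfoNCE-style) loss. The snippet below is a minimal NumPy illustration, not the paper's implementation: `temporal_persistency_loss` and its arguments are hypothetical names, and the embeddings stand in for encoder outputs of two clips sampled from the same videos.

```python
import numpy as np

def temporal_persistency_loss(z1, z2, temperature=0.1):
    """InfoNCE-style loss: z1[i] and z2[i] are embeddings of two clips
    from the same video i; all other rows serve as negatives.
    (Hypothetical helper for illustration, not the paper's code.)"""
    # L2-normalize so the dot product is cosine similarity
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature                  # (N, N) similarities
    logits -= logits.max(axis=1, keepdims=True)       # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Maximize the diagonal (same-video pairs) => temporal persistency
    return -np.mean(np.diag(log_prob))

rng = np.random.default_rng(0)
base = rng.normal(size=(8, 16))
# Two clips of the same video should embed similarly -> low loss
aligned = temporal_persistency_loss(base, base + 0.01 * rng.normal(size=(8, 16)))
# Unrelated embeddings -> loss near log(N)
random_pairs = temporal_persistency_loss(base, rng.normal(size=(8, 16)))
print(aligned, random_pairs)
```

The loss is minimized when each clip's nearest neighbor among the second set is the clip from the same video, which is exactly the persistency signal the paper's objective encourages across frameworks.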
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Action Recognition | Something-Something v2 (val) | Top-1 Accuracy | 55.8 | 535 |
| Action Recognition | UCF101 | Accuracy | 94.2 | 365 |
| Action Recognition | UCF101 (mean of 3 splits) | Accuracy | 96.3 | 357 |
| Action Recognition | Something-Something v2 (test) | -- | -- | 333 |
| Action Recognition | UCF101 (test) | Accuracy | 96.3 | 307 |
| Action Recognition | HMDB51 (test) | Accuracy | 0.721 | 249 |
| Action Recognition | Kinetics 400 (test) | -- | -- | 245 |
| Video Classification | Kinetics 400 (val) | -- | -- | 204 |
| Video Action Recognition | Kinetics-400 | Top-1 Accuracy | 71.5 | 184 |
| Video Classification | Something-Something v2 (test) | Top-1 Accuracy | 0.558 | 169 |