
When the Future Becomes the Past: Taming Temporal Correspondence for Self-supervised Video Representation Learning

About

The past decade has witnessed notable achievements in self-supervised learning for video tasks. Recent efforts typically adopt the Masked Video Modeling (MVM) paradigm, leading to significant progress on multiple video tasks. However, two critical challenges remain: 1) Without human annotations, random temporal sampling introduces uncertainty, increasing the difficulty of model training. 2) Previous MVM methods primarily recover the masked patches in pixel space, leading to insufficient information compression for downstream tasks. To address these challenges jointly, we propose a self-supervised framework that leverages Temporal Correspondence for video Representation learning (T-CoRe). For challenge 1), we propose a sandwich sampling strategy that selects two auxiliary frames to reduce reconstruction uncertainty in a two-side-squeezing manner. For challenge 2), we introduce an auxiliary branch into a self-distillation architecture to restore representations in the latent space, generating high-level semantic representations enriched with temporal information. Experiments show that T-CoRe consistently achieves superior performance across several downstream tasks, demonstrating its effectiveness for video representation learning. The code is available at https://github.com/yafeng19/T-CORE.
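
The abstract names two components: sandwich sampling (challenge 1) and latent-space restoration inside a self-distillation architecture (challenge 2). The snippet below is a minimal sketch of one plausible reading of each idea, not the authors' implementation; the function names (sandwich_sample, latent_restoration_loss), the cosine-distance target, and all hyper-parameters are illustrative assumptions. See the linked repository for the real code.

```python
import torch
import torch.nn.functional as F


def sandwich_sample(num_frames, max_gap=8, generator=None):
    """Pick a target frame index plus two auxiliary frames enclosing it.

    The target t is "squeezed" between a past frame (t - gap) and a future
    frame (t + gap): one plausible reading of two-side-squeezing sampling.
    """
    hi = min(max_gap, (num_frames - 1) // 2)          # keep t in bounds
    gap = int(torch.randint(1, hi + 1, (1,), generator=generator))
    t = int(torch.randint(gap, num_frames - gap, (1,), generator=generator))
    return t - gap, t, t + gap                        # (past, target, future)


def latent_restoration_loss(student_tokens, teacher_tokens, mask):
    """Restore masked patch representations in latent space.

    student_tokens : (B, N, D) predictions from the student branch
    teacher_tokens : (B, N, D) targets from a frozen / EMA teacher
    mask           : (B, N) boolean, True where a patch was masked
    """
    # Normalized features -> cosine distance per patch; a common
    # self-distillation target, assumed here rather than taken from the paper.
    s = F.normalize(student_tokens, dim=-1)
    t = F.normalize(teacher_tokens.detach(), dim=-1)
    per_token = 1.0 - (s * t).sum(dim=-1)
    return (per_token * mask).sum() / mask.sum().clamp(min=1)


if __name__ == "__main__":
    past, target, future = sandwich_sample(num_frames=16)
    print(f"auxiliary frames {past} and {future} enclose target {target}")

    B, N, D = 2, 196, 768
    mask = torch.rand(B, N) < 0.75                    # 75% mask ratio (assumed)
    loss = latent_restoration_loss(torch.randn(B, N, D),
                                   torch.randn(B, N, D), mask)
    print(f"latent restoration loss: {loss.item():.4f}")
```

Restoring features in latent space rather than pixels lets the loss target semantic content instead of low-level appearance, which is the information-compression argument the abstract makes.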

Yang Liu, Qianqian Xu, Peisong Wen, Siran Dai, Qingming Huang • 2025

Related benchmarks

Task                      | Dataset          | Metric            | Result | Rank
Video Object Segmentation | DAVIS 2017 (val) | J mean            | 63.5   | 1130
Video Object Segmentation | DAVIS 2017       | Jaccard Index (J) | 64.6   | 42
Video Instance Parsing    | VIP (val)        | mIoU              | 39.7   | 20
Human Pose Estimation     | JHMDB (val)      | PCK@0.1           | 47     | 19
Human Pose Estimation     | JHMDB            | PCK@0.1           | 47.1   | 12
Video Part Segmentation   | VIP              | mIoU              | 0.389  | 6

Other info

Code: https://github.com/yafeng19/T-CORE
