TrackMAE: Video Representation Learning via Track Mask and Predict

About

Masked video modeling (MVM) has emerged as a simple and scalable self-supervised pretraining paradigm, but it encodes motion information only implicitly, limiting the temporal dynamics captured in the learned representations. As a result, such models struggle on motion-centric tasks that require fine-grained motion awareness. To address this, we propose TrackMAE, a simple masked video modeling paradigm that explicitly uses motion information as a reconstruction signal. In TrackMAE, an off-the-shelf point tracker sparsely tracks points in the input videos, generating motion trajectories. We further exploit the extracted trajectories to improve random tube masking with a motion-aware masking strategy, and we enhance the video representations learned in both pixel and semantic-feature reconstruction spaces by providing a complementary supervision signal in the form of motion targets. We evaluate on six datasets across diverse downstream settings and find that TrackMAE consistently outperforms state-of-the-art video self-supervised learning baselines, learning more discriminative and generalizable representations. Code is available at https://github.com/rvandeghen/TrackMAE.
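The abstract describes biasing tube masking toward high-motion regions using point-track trajectories. The sketch below is a minimal, hypothetical illustration of that idea (not the paper's implementation): per-track displacement is accumulated onto the token grid, and tubes are masked with probability proportional to their motion score. All function and parameter names are assumptions for illustration.

```python
import numpy as np

def motion_aware_tube_mask(tracks, grid_hw, mask_ratio=0.9, rng=None):
    """Hypothetical sketch: sample a tube mask biased toward motion.

    tracks:  (N, T, 2) point trajectories in normalized [0, 1) (x, y) coords.
    grid_hw: (H, W) spatial token grid; each tube spans the full clip.
    Returns a boolean (H*W,) mask where True = masked token/tube.
    """
    rng = np.random.default_rng() if rng is None else rng
    H, W = grid_hw
    # Per-track motion magnitude: total displacement along the trajectory.
    disp = np.linalg.norm(np.diff(tracks, axis=1), axis=-1).sum(axis=1)  # (N,)
    # Accumulate each track's motion onto the grid cell of its first point.
    score = np.full(H * W, 1e-6)  # small floor so every tube stays samplable
    cell = (np.clip(tracks[:, 0, 1], 0, 0.999) * H).astype(int) * W \
         + (np.clip(tracks[:, 0, 0], 0, 0.999) * W).astype(int)
    np.add.at(score, cell, disp)
    # Mask tubes without replacement, with probability ∝ motion score.
    n_mask = int(mask_ratio * H * W)
    probs = score / score.sum()
    masked = rng.choice(H * W, size=n_mask, replace=False, p=probs)
    mask = np.zeros(H * W, dtype=bool)
    mask[masked] = True
    return mask
```

Under this scheme, high-motion tubes are hidden more often, so reconstruction forces the encoder to model temporal dynamics rather than static appearance.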

Renaud Vandeghen, Fida Mohammad Thoker, Marc Van Droogenbroeck, Bernard Ghanem • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Action Recognition | Something-Something v2 (val) | Top-1 Accuracy | 27.3 | 545 |
| Action Recognition | Kinetics-400 | Top-1 Accuracy | 86.7 | 481 |
| Action Recognition | Something-Something v2 | Top-1 Accuracy | 75.7 | 41 |
| Video Representation Generalization | SEVERE benchmark | Domain Shift (SSv2) | 72.8 | 18 |
| Action Recognition | HMDB51 (val) | -- | -- | 17 |
| Action Recognition | FineGym (val) | Top-1 Accuracy | 31.8 | 10 |
