
Cluster-Wise Spatio-Temporal Masking for Efficient Video-Language Pretraining

About

Large-scale video-language pretraining enables strong generalization across multimodal tasks but often incurs prohibitive computational costs. Although recent advances in masked visual modeling help mitigate this issue, they still suffer from two fundamental limitations: severe visual information loss under high masking ratios and temporal information leakage caused by inter-frame correlations. To address these challenges, we propose ClusterSTM, a Cluster-Wise Spatio-Temporal Masking strategy for efficient video-language pretraining. ClusterSTM first performs intra-frame clustering to partition visual tokens into multiple semantically independent clusters, then conducts cluster-wise masking by retaining the token with the highest temporal density within each cluster. Our masking strategy ensures that the retained tokens capture holistic video content while exhibiting strong temporal correlation. Additionally, we introduce a video-text relevance reconstruction objective that aligns high-level multimodal semantics beyond conventional visual reconstruction. Extensive experiments across multiple benchmarks demonstrate that ClusterSTM achieves superior performance on video-text retrieval, video question answering, and video captioning tasks, establishing a new state-of-the-art among efficient video-language models.
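The masking step described above (per-frame clustering, then keeping one token per cluster by temporal density) can be sketched in NumPy. This is a minimal illustration, not the authors' implementation: the k-means clustering and the "temporal density" proxy (mean cosine similarity of a token to the tokens at the same spatial index in the other frames) are assumptions, since the abstract does not specify either.

```python
import numpy as np

def cluster_wise_mask(tokens, num_clusters=4, seed=0):
    """Sketch of cluster-wise spatio-temporal masking.

    tokens: (T, N, D) array of per-frame visual tokens.
    Per frame, tokens are grouped into `num_clusters` clusters
    (plain k-means here, an assumption); within each cluster the
    token with the highest temporal density is retained and the
    rest are masked. Temporal density is approximated as the mean
    cosine similarity to tokens at the same spatial index in the
    other frames (also an assumption).
    Returns a boolean keep-mask of shape (T, N).
    """
    T, N, D = tokens.shape
    rng = np.random.default_rng(seed)

    # Temporal-density proxy: cosine similarity across frames
    # at the same spatial position, excluding self-similarity.
    normed = tokens / (np.linalg.norm(tokens, axis=-1, keepdims=True) + 1e-8)
    sim = np.einsum('tnd,snd->nts', normed, normed)   # (N, T, T)
    density = (sim.sum(-1) - 1.0) / max(T - 1, 1)     # (N, T)
    density = density.T                               # (T, N)

    keep = np.zeros((T, N), dtype=bool)
    for t in range(T):
        feats = tokens[t]                             # (N, D)
        # Naive per-frame k-means over token features.
        centers = feats[rng.choice(N, num_clusters, replace=False)]
        for _ in range(10):
            assign = np.argmin(
                ((feats[:, None] - centers[None]) ** 2).sum(-1), axis=1)
            for k in range(num_clusters):
                if np.any(assign == k):
                    centers[k] = feats[assign == k].mean(0)
        # Retain only the highest-density token in each cluster.
        for k in range(num_clusters):
            idx = np.where(assign == k)[0]
            if idx.size:
                keep[t, idx[np.argmax(density[t, idx])]] = True
    return keep
```

With `N` tokens per frame and `K` clusters, at most `K` tokens per frame survive, so the effective masking ratio is at least `1 - K/N` while every semantic region still contributes one representative token.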

Weijun Zhuang, Yuqing Huang, Weikang Meng, Xin Li, Ming Liu, Xiaopeng Hong, Yaowei Wang, Wangmeng Zuo • 2026

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Text-to-Video Retrieval | DiDeMo | R@1 0.585 | 459 |
| Text-to-Video Retrieval | MSVD | R@1 40.3 | 264 |
| Text-to-Video Retrieval | ActivityNet | R@1 0.549 | 238 |
| Video Captioning | MSVD | CIDEr 145.6 | 157 |
| Text-to-Video Retrieval | MSRVTT | R@1 49.7 | 116 |
| Video Captioning | MSRVTT | CIDEr 64.4 | 68 |
| Text-to-Video Retrieval | MSRVTT | Recall@1 31.2 | 59 |
