CLIP2TV: Align, Match and Distill for Video-Text Retrieval

About

Modern video-text retrieval frameworks typically consist of three parts: a video encoder, a text encoder, and a similarity head. Following the success of transformers in both visual and textual representation learning, transformer-based encoders and fusion methods have also been adopted for video-text retrieval. In this report, we present CLIP2TV, aiming to explore where the critical elements of transformer-based methods lie. To this end, we first revisit some recent works on multi-modal learning, then introduce some of their techniques into video-text retrieval, and finally evaluate them through extensive experiments in different configurations. Notably, CLIP2TV achieves 52.9 R@1 on the MSR-VTT dataset, outperforming the previous SOTA result by 4.1%.
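The three-part design described above (two encoders plus a similarity head) is commonly realized as a dual-encoder that ranks videos by cosine similarity to the text embedding. Below is a minimal sketch with toy embeddings standing in for encoder outputs; the function names (`l2_normalize`, `rank_videos`) are illustrative, not from the paper.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    """Normalize embeddings to unit length so dot products equal cosine similarity."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def rank_videos(text_emb, video_embs):
    """Rank candidate videos by cosine similarity to a text query embedding."""
    sims = l2_normalize(video_embs) @ l2_normalize(text_emb)
    return np.argsort(-sims)  # indices sorted by descending similarity

# Toy embeddings standing in for the video/text encoder outputs.
rng = np.random.default_rng(0)
video_embs = rng.standard_normal((5, 512))
text_emb = video_embs[3] + 0.1 * rng.standard_normal(512)  # query close to video 3
print(rank_videos(text_emb, video_embs)[0])  # video 3 should rank first
```

In a real system the similarity head may also fuse modalities with cross-attention rather than a plain dot product; the sketch covers only the simplest case.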

Zijian Gao, Jingyu Liu, Weiqi Sun, Sheng Chen, Dedan Chang, Lili Zhao • 2021

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Text-to-Video Retrieval | DiDeMo | R@1 | 0.455 | 459 |
| Text-to-Video Retrieval | DiDeMo (test) | R@1 | 45.5 | 399 |
| Text-to-Video Retrieval | MSVD | R@1 | 47 | 264 |
| Text-to-Video Retrieval | MSR-VTT (test) | R@1 | 49.3 | 255 |
| Text-to-Video Retrieval | ActivityNet | R@1 | 0.441 | 238 |
| Text-to-Video Retrieval | MSR-VTT (1k-A) | R@10 | 83.6 | 211 |
| Text-to-Video Retrieval | MSVD (test) | R@1 | 50.2 | 204 |
| Video-to-Text Retrieval | MSR-VTT (1k-A) | R@5 | 77.4 | 74 |
| Text-to-Video Retrieval | MSR-VTT 1k-A (test) | R@1 | 48.3 | 57 |
| Text-to-Video Retrieval | ActivityNet Captions | R@1 | 44.1 | 56 |

Showing 10 of 15 rows.
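The R@K numbers reported for these benchmarks measure the fraction of queries whose ground-truth item appears in the top K retrieved results, usually expressed as a percentage. A minimal sketch of that computation (function names are illustrative):

```python
def recall_at_k(ranked_indices, gt_index, k):
    """1 if the ground-truth item appears in the top-k results, else 0."""
    return int(gt_index in ranked_indices[:k])

def mean_recall_at_k(all_rankings, gt_indices, k):
    """Average recall@k over all queries, as a percentage."""
    hits = [recall_at_k(r, g, k) for r, g in zip(all_rankings, gt_indices)]
    return 100.0 * sum(hits) / len(hits)

# Toy example: 4 queries, each ranking the same 5 candidate items;
# the ground-truth item for every query is item 0.
rankings = [[0, 1, 2, 3, 4], [2, 0, 1, 3, 4], [4, 3, 2, 1, 0], [1, 0, 2, 3, 4]]
gts = [0, 0, 0, 0]
print(mean_recall_at_k(rankings, gts, 1))  # 25.0 (only the first query hits at rank 1)
print(mean_recall_at_k(rankings, gts, 5))  # 100.0 (every query hits within the top 5)
```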
