CLIP-ViP: Adapting Pre-trained Image-Text Model to Video-Language Representation Alignment
About
Pre-trained image-text models like CLIP have demonstrated the strong power of vision-language representations learned from large-scale web-collected image-text data. In light of the well-learned visual features, some existing works transfer image representations to the video domain and achieve good results. However, how to utilize image-language pre-trained models (e.g., CLIP) for video-language pre-training (post-pretraining) is still underexplored. In this paper, we investigate two questions: 1) what factors hinder post-pretraining CLIP from further improving performance on video-language tasks? and 2) how can the impact of these factors be mitigated? Through a series of comparative experiments and analyses, we find that the data scale and the domain gap between language sources have a great impact. Motivated by these findings, we propose an Omnisource Cross-modal Learning method equipped with a Video Proxy mechanism on the basis of CLIP, namely CLIP-ViP. Extensive results show that our approach improves the performance of CLIP on video-text retrieval by a large margin. Our model also achieves SOTA results on a variety of datasets, including MSR-VTT, DiDeMo, LSMDC, and ActivityNet. We will release our code and pre-trained CLIP-ViP models at https://github.com/microsoft/XPretrain/tree/main/CLIP-ViP.
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Text-to-Video Retrieval | DiDeMo (test) | R@1: 55.3 | 376 |
| Text-to-Video Retrieval | DiDeMo | R@1: 0.557 | 360 |
| Text-to-Video Retrieval | MSR-VTT | Recall@1: 54.2 | 313 |
| Text-to-Video Retrieval | MSR-VTT (test) | R@1: 57.7 | 234 |
| Text-to-Video Retrieval | LSMDC (test) | R@1: 30.7 | 225 |
| Text-to-Video Retrieval | ActivityNet | R@1: 53.4 | 197 |
| Text-to-Video Retrieval | MSRVTT (test) | Recall@1: 0.577 | 155 |
| Text-to-Video Retrieval | LSMDC | R@1: 29.4 | 154 |
| Text-to-Video Retrieval | ActivityNet (test) | R@1: 61.4 | 108 |
| Video-to-Text Retrieval | DiDeMo | R@1: 46.3 | 108 |