
Support-set bottlenecks for video-text representation learning

About

The dominant paradigm for learning video-text representations -- noise contrastive learning -- increases the similarity of the representations of pairs of samples that are known to be related, such as text and video from the same sample, and pushes away the representations of all other pairs. We posit that this last behaviour is too strict, enforcing dissimilar representations even for samples that are semantically related -- for example, visually similar videos or ones that share the same depicted action. In this paper, we propose a novel method that alleviates this by leveraging a generative model to naturally push these related samples together: each sample's caption must be reconstructed as a weighted combination of other support samples' visual representations. This simple idea ensures that representations are not overly specialized to individual samples and are reusable across the dataset, and, unlike noise contrastive learning, it yields representations that explicitly encode the semantics shared between samples. Our proposed method outperforms others by a large margin on MSR-VTT, VATEX, ActivityNet, and MSVD for video-to-text and text-to-video retrieval.

Mandela Patrick, Po-Yao Huang, Yuki Asano, Florian Metze, Alexander Hauptmann, João Henriques, Andrea Vedaldi · 2020
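
The core idea lends itself to a short sketch. The PyTorch snippet below illustrates just the support-set weighting described in the abstract: each caption embedding is reconstructed as an attention-weighted combination of the other samples' video embeddings. This is not the authors' implementation; the function name, shapes, temperature, diagonal mask, and the MSE stand-in for the generative loss are all assumptions made for illustration.

```python
import torch
import torch.nn.functional as F

def support_set_reconstruction(text_feats, video_feats, temperature=0.07):
    """Illustrative sketch: reconstruct each caption embedding as an
    attention-weighted combination of the *other* samples' video
    embeddings in the batch (the "support set").

    text_feats:  (B, D) caption embeddings
    video_feats: (B, D) video embeddings
    """
    t = F.normalize(text_feats, dim=-1)
    v = F.normalize(video_feats, dim=-1)

    # Cross-modal attention logits between every caption and every video.
    logits = t @ v.T / temperature                      # (B, B)

    # Mask the diagonal so a caption cannot be reconstructed from its
    # own video, forcing it to reuse semantically related samples.
    mask = torch.eye(len(t), dtype=torch.bool, device=t.device)
    logits = logits.masked_fill(mask, float("-inf"))

    weights = logits.softmax(dim=-1)                    # support-set weights
    reconstruction = weights @ v                        # (B, D)

    # Placeholder loss; the paper uses a generative captioning objective.
    loss = F.mse_loss(reconstruction, t)
    return reconstruction, weights, loss
```

In the paper the attended video features condition a text generator that must re-emit the caption itself; the MSE term above is only a self-contained placeholder for that captioning loss.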

Related benchmarks

Task                     | Dataset            | Metric | Score (%) | Rank
Text-to-Video Retrieval  | MSR-VTT            | R@1    | 30.1      | 313
Text-to-Video Retrieval  | MSR-VTT (test)     | R@1    | 30.1      | 234
Text-to-Video Retrieval  | LSMDC (test)       | R@1    | 15.1      | 225
Text-to-Video Retrieval  | MSVD               | R@1    | 28.4      | 218
Text-to-Video Retrieval  | MSR-VTT (1k-A)     | R@10   | 69.3      | 211
Text-to-Video Retrieval  | MSVD (test)        | R@1    | 28.4      | 204
Text-to-Video Retrieval  | ActivityNet        | R@1    | 29.2      | 197
Video-to-Text Retrieval  | MSR-VTT            | R@1    | 28.5      | 157
Text-to-Video Retrieval  | MSRVTT (test)      | R@1    | 30.1      | 155
Text-to-Video Retrieval  | ActivityNet (test) | R@1    | 29.2      | 108
(Showing 10 of 60 rows.)
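
For reference, R@K (Recall at K) is the percentage of queries whose ground-truth match appears among the top K retrieved candidates. A minimal NumPy sketch of the computation follows; the helper name and the toy similarity matrix are illustrative, not from the paper.

```python
import numpy as np

def recall_at_k(similarity: np.ndarray, k: int) -> float:
    """Recall@K: similarity[i, j] scores query i against candidate j,
    and candidate i is assumed to be the ground-truth match for query i.
    Returns the percentage of queries whose match ranks in the top K."""
    # Candidates sorted best-first for every query.
    order = np.argsort(-similarity, axis=1)
    # Position (0 = best) of each query's ground-truth candidate.
    ranks = np.argmax(order == np.arange(len(similarity))[:, None], axis=1)
    return 100.0 * float(np.mean(ranks < k))

# Toy example: 3 text queries scored against 3 videos.
sim = np.array([[0.9, 0.1, 0.3],
                [0.2, 0.4, 0.8],
                [0.1, 0.7, 0.6]])
print(recall_at_k(sim, 1))  # ~33.3: only query 0 ranks its own video first
print(recall_at_k(sim, 2))  # 100.0: every true match is within the top 2
```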
