
Learning Audio-Video Modalities from Image Captions

About

A major challenge in text-video and text-audio retrieval is the lack of large-scale training data. This is unlike image captioning, where datasets are on the order of millions of samples. To close this gap, we propose a new video mining pipeline which involves transferring captions from image captioning datasets to video clips with no additional manual effort. Using this pipeline, we create a new large-scale, weakly labelled audio-video captioning dataset consisting of millions of paired clips and captions. We show that training a multimodal transformer-based model on this data achieves competitive performance on video retrieval and video captioning, matching or even outperforming HowTo100M pretraining with 20x fewer clips. We also show that our mined clips are suitable for text-audio pretraining, and achieve state-of-the-art results for the task of audio retrieval.
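The core of the mining pipeline is transferring an existing image caption to video frames that look visually similar to the captioned image. A minimal sketch of that idea, assuming unit-normalised visual embeddings are already available (the function name, embedding source, and threshold are illustrative, not the paper's implementation):

```python
import numpy as np

def transfer_captions(image_embs, captions, frame_embs, threshold=0.8):
    """Assign an image's caption to a video frame when their (unit-norm)
    visual embeddings are sufficiently similar.

    image_embs: (num_images, d) embeddings of captioned seed images
    captions:   list of num_images caption strings
    frame_embs: (num_frames, d) embeddings of candidate video frames
    Returns a list of (frame_index, caption) pairs for confident matches.
    """
    # Cosine similarity between every frame and every captioned image.
    sims = frame_embs @ image_embs.T              # (num_frames, num_images)
    best = sims.argmax(axis=1)                    # closest seed image per frame
    best_sim = sims[np.arange(len(frame_embs)), best]
    # Keep only confident matches; each kept frame inherits the caption,
    # yielding weakly labelled clip-caption pairs with no manual effort.
    return [(f, captions[i])
            for f, (i, s) in enumerate(zip(best, best_sim))
            if s >= threshold]
```

In practice the matched frame would anchor a short clip around it, and the audio track of that clip gives the text-audio pairs mentioned in the abstract.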

Arsha Nagrani, Paul Hongsuck Seo, Bryan Seybold, Anja Hauth, Santiago Manen, Chen Sun, Cordelia Schmid • 2022

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Text-to-Video Retrieval | MSR-VTT | Recall@1 | 19.4 | 313 |
| Text-to-Video Retrieval | MSR-VTT (1k-A) | R@10 | 76.9 | 211 |
| Text-to-Video Retrieval | MSRVTT (test) | Recall@1 | 0.358 | 155 |
| Text-to-Audio Retrieval | AudioCaps (test) | Recall@1 | 10.6 | 145 |
| Video Captioning | MSR-VTT (test) | CIDEr | 56 | 121 |
| Text-to-Video Retrieval | YouCook2 (val) | R@1 | 5.3 | 66 |
| Text-to-Audio Retrieval | Clotho (test) | R@1 | 3 | 62 |
| Audio Retrieval | AudioCaps | R@1 | 35.5 | 42 |
| Text-to-Video Retrieval | MSR-VTT 1K (val) | R@1 | 33.9 | 38 |
| Cross-modal retrieval | Clotho (test) | R@1 | 12.6 | 29 |

Showing 10 of 15 rows.
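The R@K (Recall@K) metric used throughout the table measures the fraction of text queries whose correct video (or audio) appears among the top-K retrieved candidates. A minimal sketch, assuming a query-by-candidate similarity matrix where query i's ground-truth match sits at candidate index i (a common convention; this is an illustration, not the paper's evaluation code):

```python
import numpy as np

def recall_at_k(sim, k=1):
    """Recall@K for a retrieval similarity matrix.

    sim: (num_queries, num_candidates) similarity scores, where the
         ground-truth candidate for query i is assumed to be index i.
    Returns the fraction of queries whose match is in the top-k results.
    """
    # Rank candidates for each query by descending similarity.
    ranks = np.argsort(-sim, axis=1)
    # A query is a hit if its own index appears among its top-k candidates.
    hits = (ranks[:, :k] == np.arange(sim.shape[0])[:, None]).any(axis=1)
    return float(hits.mean())
```

Results in the table are reported either as percentages (e.g. 19.4) or fractions (e.g. 0.358), depending on the benchmark's convention.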
