
TeachText: CrossModal Generalized Distillation for Text-Video Retrieval

About

In recent years, considerable progress on the task of text-video retrieval has been achieved by leveraging large-scale pretraining on visual and audio datasets to construct powerful video encoders. By contrast, despite the natural symmetry, the design of effective algorithms for exploiting large-scale language pretraining remains under-explored. In this work, we are the first to investigate the design of such algorithms and propose a novel generalized distillation method, TeachText, which leverages complementary cues from multiple text encoders to provide an enhanced supervisory signal to the retrieval model. Moreover, we extend our method to video side modalities and show that we can effectively reduce the number of used modalities at test time without compromising performance. Our approach advances the state of the art on several video retrieval benchmarks by a significant margin and adds no computational overhead at test time. Last but not least, we show an effective application of our method for eliminating noise from retrieval datasets. Code and data can be found at https://www.robots.ox.ac.uk/~vgg/research/teachtext/.
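The core idea above — teachers built from multiple text encoders provide an extra supervisory signal to the retrieval student, then are discarded at test time — can be illustrated with a minimal sketch. This is a hypothetical illustration of the generalized-distillation objective, not the authors' implementation: the aggregation by averaging and the MSE regression term are assumptions for clarity.

```python
import numpy as np

def teachtext_distillation_loss(student_sim, teacher_sims):
    """Hypothetical sketch of the generalized-distillation idea: similarity
    matrices produced with several text encoders (the teachers) are combined
    into a single target that supervises the student's text-video
    similarities. Shapes: student_sim is (B, B); each teacher matrix is (B, B).
    """
    # aggregate the complementary teacher cues (averaging is an assumption)
    target = np.mean(np.stack(teacher_sims), axis=0)
    # regress the student's similarities onto the aggregated teacher signal
    return float(np.mean((student_sim - target) ** 2))

# During training this term would be added to the usual contrastive retrieval
# loss; at test time the teachers are dropped, so no overhead remains.
```

Because the teachers are only used to shape the training target, the deployed model is the student alone, which is why the method adds no computational cost at test time.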

Ioana Croitoru, Simion-Vlad Bogolin, Marius Leordeanu, Hailin Jin, Andrew Zisserman, Samuel Albanie, Yang Liu • 2021

Related benchmarks

Task | Dataset | Result | Rank
Text-to-Video Retrieval | DiDeMo (test) | R@1 34.6 | 376
Text-to-Video Retrieval | DiDeMo | R@1 0.216 | 360
Text-to-Video Retrieval | MSR-VTT | R@1 29.6 | 313
Text-to-Video Retrieval | MSR-VTT (test) | R@1 29.6 | 234
Text-to-Video Retrieval | LSMDC (test) | R@1 17.2 | 225
Text-to-Video Retrieval | MSVD | R@1 25.4 | 218
Text-to-Video Retrieval | MSR-VTT (1k-A) | R@10 74.2 | 211
Text-to-Video Retrieval | MSVD (test) | R@1 25.4 | 204
Text-to-Video Retrieval | ActivityNet | R@1 0.235 | 197
Video-to-Text Retrieval | MSR-VTT | R@1 32.1 | 157

(Showing 10 of 46 rows)
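The Result column reports Recall@K (R@K): the fraction of queries whose ground-truth item appears among the top-K retrieved results. A minimal sketch of how this is computed from a text-video similarity matrix, assuming each query's matched item sits on the diagonal (the standard benchmark convention; the function name is ours):

```python
import numpy as np

def recall_at_k(sim, k):
    """Recall@K for retrieval: sim[i, j] scores query i against item j,
    and the ground-truth match for query i is assumed to be item i."""
    # rank items for each query by descending similarity
    ranks = (-sim).argsort(axis=1)
    # a hit means the ground-truth index appears in the top-k for that query
    hits = [i in ranks[i, :k] for i in range(sim.shape[0])]
    return float(np.mean(hits))
```

Higher R@1 and R@10 are better; some rows above report the value as a fraction (e.g. 0.216) rather than a percentage (21.6).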
