MDMMT: Multidomain Multimodal Transformer for Video Retrieval
About
We present a new state of the art on the text-to-video retrieval task on the MSRVTT and LSMDC benchmarks, where our model outperforms all previous solutions by a large margin. Moreover, these state-of-the-art results are achieved with a single model on both datasets without finetuning. This multidomain generalisation is achieved by a proper combination of different video caption datasets: we show that joint training on several datasets improves test results on each of them. Additionally, we check the intersection between many popular datasets and find that MSRVTT has a significant overlap between its test and train parts; the same situation is observed for ActivityNet.
Maksim Dzabraev, Maksim Kalashnikov, Stepan Komkov, Aleksandr Petiushko • 2021
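
The core recipe above (one model trained on a pool of caption datasets) can be illustrated with a minimal PyTorch sketch. The dummy datasets, feature shapes, and use of `ConcatDataset` here are illustrative assumptions, not the paper's exact pipeline:

```python
import torch
from torch.utils.data import ConcatDataset, DataLoader, TensorDataset

# Stand-ins for per-domain caption datasets (MSR-VTT, LSMDC, ...);
# each sample is a (video_feature, caption_token_ids) pair.
def dummy_domain(num_clips):
    video_feats = torch.randn(num_clips, 512)              # pooled clip features (assumed dim)
    caption_ids = torch.randint(0, 1000, (num_clips, 20))  # tokenized captions (assumed length)
    return TensorDataset(video_feats, caption_ids)

domains = [dummy_domain(100), dummy_domain(200), dummy_domain(50)]

# A single ConcatDataset lets one model train on all domains jointly,
# so no per-benchmark finetuning is needed at evaluation time.
combined = ConcatDataset(domains)
loader = DataLoader(combined, batch_size=64, shuffle=True)

for video_feats, caption_ids in loader:
    pass  # one joint training step over a mixed-domain batch
```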
Related benchmarks
| Task | Dataset | Metric | Score | Rank |
|---|---|---|---|---|
| Text-to-Video Retrieval | MSR-VTT | Recall@1 | 38.9 | 313 |
| Text-to-Video Retrieval | LSMDC (test) | R@1 | 18.8 | 225 |
| Text-to-Video Retrieval | MSR-VTT (1k-A) | R@10 | 79.7 | 211 |
| Text-to-Video Retrieval | LSMDC | R@1 | 18.8 | 154 |
| Text-to-Video Retrieval | MSR-VTT 1k-A (test) | R@1 | 38.9 | 57 |
| Text-to-Video Retrieval | MSR-VTT (Full) | R@1 | 23.1 | 55 |
| Text-to-Video Retrieval | MSR-VTT 9K (train) | R@1 | 38.9 | 12 |
| Text-to-Video Retrieval | MSR-VTT 1K-A 9K (test) | Recall@1 | 38.9 | 10 |
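
For context, Recall@K (R@K) is the fraction of text queries whose ground-truth video ranks in the top K retrieved results. A minimal sketch, assuming a precomputed query-by-video similarity matrix with ground-truth matches on the diagonal (the function and toy data are illustrative, not the paper's evaluation code):

```python
import numpy as np

def recall_at_k(sim: np.ndarray, k: int) -> float:
    """Recall@K for text-to-video retrieval.

    sim[i, j] is the similarity between text query i and video j;
    query i's ground-truth video is assumed to be video i.
    """
    gt_scores = np.diag(sim)
    # Rank of the correct video = number of videos scored strictly higher.
    ranks = (sim > gt_scores[:, None]).sum(axis=1)
    return float((ranks < k).mean())

# Toy example: 3 queries, 3 videos; query 2's video is ranked second,
# so R@1 = 2/3 while R@2 and above would be 1.0.
sim = np.array([[0.9, 0.1, 0.2],
                [0.3, 0.8, 0.4],
                [0.2, 0.6, 0.5]])
print(recall_at_k(sim, 1))  # 0.666...
```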