
Object-aware Video-language Pre-training for Retrieval

About

Recently, with the introduction of large-scale datasets and strong transformer networks, video-language pre-training has shown great success, especially for retrieval. Yet existing video-language transformer models do not explicitly learn fine-grained semantic alignment between vision and language. In this work, we present Object-aware Transformers, an object-centric approach that extends video-language transformers to incorporate object representations. The key idea is to leverage bounding boxes and object tags to guide the training process. We evaluate our model on three standard sub-tasks of video-text matching across four widely used benchmarks, and we provide in-depth analysis and detailed ablations of the proposed method. We show clear improvements in performance across all tasks and datasets considered, demonstrating the value of incorporating object representations into a video-language architecture. The code will be released at https://github.com/FingerRec/OA-Transformer.
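
The abstract describes the idea only at a high level, but the mechanism is easy to sketch. Below is a minimal, hypothetical PyTorch illustration, not the authors' released implementation: a video encoder and a text encoder are trained with a standard contrastive (InfoNCE) loss, while detector-produced bounding boxes select the video patches they cover and object tags supervise an auxiliary classification loss on those patches. Every module name, dimension, and loss form here is an assumption for illustration; the actual OA-Transformer design is in the repository linked above.

```python
# Illustrative sketch of object-guided video-language pre-training.
# NOT the authors' code: all names, dimensions, and losses are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class OATransformerSketch(nn.Module):
    def __init__(self, dim=256, vocab=30522, num_tags=1600):
        super().__init__()
        self.patch_proj = nn.Linear(768, dim)      # project video patch features
        self.video_enc = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True), num_layers=2)
        self.text_emb = nn.Embedding(vocab, dim)
        self.text_enc = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True), num_layers=2)
        self.tag_head = nn.Linear(dim, num_tags)   # predict detector object tags

    def forward(self, patches, token_ids):
        v = self.video_enc(self.patch_proj(patches))  # (B, Np, dim)
        t = self.text_enc(self.text_emb(token_ids))   # (B, Nt, dim)
        return v, t

def box_mask(patches_per_side, boxes):
    """Mark which patches of a frame fall inside any detected box.
    boxes: (K, 4) in normalized xyxy coordinates."""
    ys, xs = torch.meshgrid(
        torch.linspace(0, 1, patches_per_side),
        torch.linspace(0, 1, patches_per_side), indexing="ij")
    mask = torch.zeros(patches_per_side, patches_per_side, dtype=torch.bool)
    for x1, y1, x2, y2 in boxes.tolist():
        mask |= (xs >= x1) & (xs <= x2) & (ys >= y1) & (ys <= y2)
    return mask.flatten()                             # (Np,)

def training_losses(model, patches, token_ids, boxes, tag_labels):
    v, t = model(patches, token_ids)
    # 1) Global video-text contrastive loss (standard symmetric InfoNCE).
    v_cls = F.normalize(v.mean(1), dim=-1)
    t_cls = F.normalize(t.mean(1), dim=-1)
    logits = v_cls @ t_cls.T / 0.07
    target = torch.arange(len(logits))
    nce = (F.cross_entropy(logits, target) + F.cross_entropy(logits.T, target)) / 2
    # 2) Object-guided loss: patches inside detector boxes should predict
    #    the detector's object tags (one illustrative form of tag supervision).
    mask = box_mask(int(v.shape[1] ** 0.5), boxes)    # same boxes for all clips here
    obj_feat = v[:, mask].mean(1)
    tag_loss = F.cross_entropy(model.tag_head(obj_feat), tag_labels)
    return nce + tag_loss

model = OATransformerSketch()
patches = torch.randn(4, 16, 768)                     # 4 clips, a 4x4 patch grid each
tokens = torch.randint(0, 30522, (4, 12))
boxes = torch.tensor([[0.1, 0.1, 0.6, 0.7]])          # one detected box (normalized)
tags = torch.randint(0, 1600, (4,))
print(training_losses(model, patches, tokens, boxes, tags))
```

Note that in this sketch the detector outputs feed only the losses, so retrieval at inference runs on the plain two-stream encoders; whether the released model makes the same choice is not stated on this page.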

Alex Jinpeng Wang, Yixiao Ge, Guanyu Cai, Rui Yan, Xudong Lin, Ying Shan, Xiaohu Qie, Mike Zheng Shou · 2021

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Text-to-Video Retrieval | DiDeMo (test) | R@1 | 34.8 | 376 |
| Text-to-Video Retrieval | LSMDC (test) | R@1 | 18.2 | 225 |
| Text-to-Video Retrieval | MSVD (test) | R@1 | 51.4 | 204 |
| Text-to-Video Retrieval | MSRVTT (test) | R@1 | 0.358 | 155 |
| Text-to-Video Retrieval | MSRVTT | R@1 | 40.9 | 75 |
| Temporal Grounding | Charades-STA (test) | R@1 (IoU=0.5) | 39.2 | 68 |
| Text-to-Video Retrieval | MSRVTT 1k (test) | R@10 | 55.6 | 63 |
| Text-to-Video Retrieval | MSRVTT 1K 1.0 (test) | R@1 | 40.9 | 23 |
| Video Grounding | ActivityNet-Captions (test) | R@1 (IoU=0.5) | 43.6 | 15 |
| Text-to-Video Retrieval | DiDeMo 28s (test) | R@1 | 34.8 | 11 |

Other info

Code: https://github.com/FingerRec/OA-Transformer
