
Vid2Seq: Large-Scale Pretraining of a Visual Language Model for Dense Video Captioning

About

In this work, we introduce Vid2Seq, a multi-modal single-stage dense event captioning model pretrained on narrated videos, which are readily available at scale. The Vid2Seq architecture augments a language model with special time tokens, allowing it to seamlessly predict event boundaries and textual descriptions in the same output sequence. Such a unified model requires large-scale training data, which is not available in current annotated datasets. We show that it is possible to leverage unlabeled narrated videos for dense video captioning by reformulating sentence boundaries of transcribed speech as pseudo event boundaries, and using the transcribed speech sentences as pseudo event captions. The resulting Vid2Seq model pretrained on the YT-Temporal-1B dataset improves the state of the art on a variety of dense video captioning benchmarks including YouCook2, ViTT and ActivityNet Captions. Vid2Seq also generalizes well to the tasks of video paragraph captioning and video clip captioning, and to few-shot settings. Our code is publicly available at https://antoyang.github.io/vid2seq.html.

Antoine Yang, Arsha Nagrani, Paul Hongsuck Seo, Antoine Miech, Jordi Pont-Tuset, Ivan Laptev, Josef Sivic, Cordelia Schmid • 2023
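
The abstract describes events being serialized into a single output sequence via special time tokens, with ASR sentence timestamps serving as pseudo event boundaries during pretraining. Below is a minimal Python sketch of that idea, not the paper's released code: the bin count, token format, and helper names are illustrative assumptions.

```python
# Minimal sketch (not the official Vid2Seq implementation) of how special
# time tokens can turn dense event captioning into sequence generation.
# NUM_TIME_BINS, the "<time=...>" token format, and the helper names are
# assumptions for illustration only.

NUM_TIME_BINS = 100  # assumed number of quantized time tokens


def time_token(t_seconds: float, video_duration: float) -> str:
    """Quantize an absolute timestamp into one of NUM_TIME_BINS special tokens."""
    bin_idx = min(int(t_seconds / video_duration * NUM_TIME_BINS), NUM_TIME_BINS - 1)
    return f"<time={bin_idx}>"


def build_target_sequence(events, video_duration: float) -> str:
    """Serialize (start, end, caption) events into one output sequence:
    each event becomes <time=start_bin> <time=end_bin> caption."""
    parts = []
    for start, end, caption in sorted(events, key=lambda e: e[0]):
        parts.append(
            f"{time_token(start, video_duration)} "
            f"{time_token(end, video_duration)} {caption}"
        )
    return " ".join(parts)


# Pretraining-style pseudo labels: transcribed speech sentences with their
# ASR timestamps stand in for annotated events (values are made up).
asr_sentences = [
    (12.0, 25.5, "add the chopped onions to the pan"),
    (30.2, 41.0, "stir until they turn golden brown"),
]
print(build_target_sequence(asr_sentences, video_duration=120.0))
# -> "<time=10> <time=21> add the chopped onions to the pan <time=25> <time=34> stir until they turn golden brown"
```

At inference, decoding such a sequence and inverting the time-token quantization recovers both the event boundaries and their captions from a single generated string.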

Related benchmarks

Task | Dataset | Result | Rank
Video Question Answering | MSRVTT-QA (test) | Accuracy: 44.8 | 371
Text-to-Video Retrieval | DiDeMo | R@1: 0.576 | 360
Video Question Answering | MSVD-QA (test) | Accuracy: 53.1 | 274
Temporal Action Localization | ActivityNet 1.3 (val) | -- | 257
Text-to-Video Retrieval | ActivityNet | R@1: 0.551 | 197
Video Captioning | MSVD | CIDEr: 146.2 | 128
Video Captioning | MSR-VTT (test) | CIDEr: 64.6 | 121
Video Captioning | MSVD (test) | CIDEr: 146.2 | 111
Video-to-Text Retrieval | DiDeMo | R@1: 47.2 | 108
Video Captioning | MSRVTT | CIDEr: 64.6 | 101

Showing 10 of 49 rows

Other info

Code: https://antoyang.github.io/vid2seq.html
