
End-to-end Dense Video Captioning as Sequence Generation

About

Dense video captioning aims to identify the events of interest in an input video and generate a descriptive caption for each event. Previous approaches usually follow a two-stage generative process that first proposes a segment for each event and then renders a caption for each identified segment. Recent advances in large-scale sequence generation pretraining have seen great success in unifying the formulation of a wide variety of tasks, but so far, more complex tasks such as dense video captioning have not been able to fully utilize this powerful paradigm. In this work, we show how to model the two subtasks of dense video captioning jointly as one sequence generation task, simultaneously predicting the events and their corresponding descriptions. Experiments on YouCook2 and ViTT show encouraging results and indicate the feasibility of training complex tasks such as end-to-end dense video captioning within large-scale pretrained models.

Wanrong Zhu, Bo Pang, Ashish V. Thapliyal, William Yang Wang, Radu Soricut• 2022
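The core idea of casting both subtasks as one sequence generation problem can be illustrated by serializing each event's segment boundaries and caption into a single target sequence for a seq2seq decoder. The sketch below is a minimal illustration, assuming a scheme that quantizes timestamps into discrete time tokens; the exact token format and bin count are assumptions for illustration, not necessarily the paper's precise target format.

```python
# Hedged sketch: serialize dense-video-captioning targets into one
# token sequence, so event segments and captions can be predicted
# jointly by a single seq2seq decoder. The time-token scheme
# (quantizing timestamps into n_bins discrete tokens) is an
# illustrative assumption.

def quantize(t, video_len, n_bins=100):
    """Map a timestamp in seconds to a discrete time-token id in [0, n_bins)."""
    return min(int(t / video_len * n_bins), n_bins - 1)

def serialize_events(events, video_len, n_bins=100):
    """Flatten (start, end, caption) events into one target string.

    events: list of (start_sec, end_sec, caption) tuples.
    Events are sorted by start time, then each is emitted as
    "<start_token> <end_token> caption", concatenated in order.
    """
    parts = []
    for start, end, caption in sorted(events, key=lambda e: e[0]):
        parts.append(f"<{quantize(start, video_len, n_bins)}>")
        parts.append(f"<{quantize(end, video_len, n_bins)}>")
        parts.append(caption)
    return " ".join(parts)

events = [(10.0, 30.0, "pour the oil"), (35.0, 50.0, "add the onions")]
print(serialize_events(events, video_len=100.0))
# → <10> <30> pour the oil <35> <50> add the onions
```

At inference time, the decoder's output sequence would be parsed back into (segment, caption) pairs, so event localization and captioning fall out of a single generation pass.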

Related benchmarks

Task                   | Dataset            | Metric | Result | Rank
Dense Video Captioning | YouCook2 (val)     | SODA_c | 20     | 36
Event localization     | YouCook2 (val)     | Recall | 20.7   | 24
Event Captioning       | YouCook2 1.0 (val) | METEOR | 3.5    | 12
Event localization     | ViTT (test)        | Recall | 0.322  | 8
Dense Video Captioning | ViTT (test)        | SODA_c | 25     | 7
Event Captioning       | YouCook2           | METEOR | 3.5    | 6
