
Unifying Event Detection and Captioning as Sequence Generation via Pre-Training

About

Dense video captioning aims to generate corresponding text descriptions for a series of events in an untrimmed video, and can be divided into two sub-tasks: event detection and event captioning. Unlike previous works that tackle the two sub-tasks separately, recent works have focused on enhancing the inter-task association between them. However, designing inter-task interactions for event detection and captioning is not trivial due to the large differences in their task-specific solutions. Besides, previous event detection methods normally ignore temporal dependencies between events, leading to event redundancy or inconsistency problems. To tackle the above two defects, in this paper, we define event detection as a sequence generation task and propose a unified pre-training and fine-tuning framework to naturally enhance the inter-task association between event detection and captioning. Since the model predicts each event with previous events as context, the inter-dependency between events is fully exploited, and thus our model can detect more diverse and consistent events in the video. Experiments on the ActivityNet dataset show that our model outperforms the state-of-the-art methods, and can be further boosted when pre-trained on extra large-scale video-text data. Code is available at \url{https://github.com/QiQAng/UEDVC}.
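To give a concrete sense of how event detection can be cast as sequence generation, the sketch below shows one plausible tokenization scheme: each event's start and end times are quantized into discrete time-bin tokens, so a set of events becomes a flat token sequence that an autoregressive model could generate event by event, with earlier events as context. The bin count and the exact encoding are illustrative assumptions, not the paper's actual specification.

```python
# Hypothetical sketch: casting event detection as sequence generation.
# Each event's (start, end) time is quantized into one of NUM_BINS time
# tokens, so a set of events becomes a flat token sequence; an
# autoregressive model would emit it left to right, conditioning each
# event on the previously generated ones.
NUM_BINS = 100  # assumed quantization granularity (not from the paper)

def events_to_sequence(events, duration, num_bins=NUM_BINS):
    """Quantize (start, end) event times into a flat token sequence,
    sorted by start time so earlier events form the context for later ones."""
    tokens = []
    for start, end in sorted(events):
        s = min(int(start / duration * num_bins), num_bins - 1)
        e = min(int(end / duration * num_bins), num_bins - 1)
        tokens.extend([s, e])
    return tokens

def sequence_to_events(tokens, duration, num_bins=NUM_BINS):
    """Inverse mapping: decode consecutive token pairs back into time spans."""
    events = []
    for i in range(0, len(tokens) - 1, 2):
        s = tokens[i] / num_bins * duration
        e = (tokens[i + 1] + 1) / num_bins * duration
        events.append((s, e))
    return events

# Example: two events in a 60-second video become a 4-token sequence.
seq = events_to_sequence([(0.0, 12.0), (30.0, 59.0)], duration=60.0)
```

Decoding `seq` with `sequence_to_events` recovers the original spans up to the quantization error of one time bin, which is the usual trade-off when discretizing continuous timestamps into a finite token vocabulary.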

Qi Zhang, Yuqing Song, Qin Jin • 2022

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Temporal Action Localization | ActivityNet 1.3 (val) | -- | 257 |
| Dense Video Captioning | ActivityNet Captions (val) | -- | 54 |
| Dense Video Captioning | ActivityNet Captions | METEOR 7.33 | 43 |
| Dense Video Captioning | YouCook2 (val) | METEOR 2.18 | 19 |
| Dense Video Captioning | ActivityNet (val) | -- | 16 |
| Event Proposal Generation | ActivityNet Captions (val) | Recall Avg 59 | 13 |
| Event Captioning | ActivityNet Captions v1.3 (test) | -- | 5 |
