
Streaming Dense Video Captioning

About

An ideal model for dense video captioning -- predicting captions localized temporally in a video -- should be able to handle long input videos, predict rich, detailed textual descriptions, and be able to produce outputs before processing the entire video. Current state-of-the-art models, however, process a fixed number of downsampled frames, and make a single full prediction after seeing the whole video. We propose a streaming dense video captioning model that consists of two novel components: First, we propose a new memory module, based on clustering incoming tokens, which can handle arbitrarily long videos as the memory is of a fixed size. Second, we develop a streaming decoding algorithm that enables our model to make predictions before the entire video has been processed. Our model achieves this streaming ability, and significantly improves the state-of-the-art on three dense video captioning benchmarks: ActivityNet, YouCook2 and ViTT. Our code is released at https://github.com/google-research/scenic.
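The first component, the fixed-size memory built by clustering incoming tokens, can be illustrated with a minimal NumPy sketch. This is an assumption-laden illustration, not the authors' implementation (the released code in the scenic repository contains the actual algorithm): here plain k-means merges the previous memory with each new frame's tokens and keeps only the cluster centers, so memory size stays constant however long the video runs. The function name `cluster_memory` and all parameters are hypothetical.

```python
import numpy as np

def cluster_memory(memory: np.ndarray, new_tokens: np.ndarray,
                   memory_size: int, iters: int = 5,
                   seed: int = 0) -> np.ndarray:
    """Compress (old memory + new frame tokens) back to a fixed-size memory.

    Simple k-means over token features: the cluster centers become the new
    memory, so its size never exceeds `memory_size` regardless of video length.
    """
    tokens = np.concatenate([memory, new_tokens], axis=0)
    if tokens.shape[0] <= memory_size:
        return tokens
    rng = np.random.default_rng(seed)
    # Initialize centers with a random subset of the pooled tokens.
    centers = tokens[rng.choice(tokens.shape[0], memory_size, replace=False)]
    for _ in range(iters):
        # Assign each token to its nearest center.
        dists = np.linalg.norm(tokens[:, None, :] - centers[None, :, :], axis=-1)
        assign = dists.argmin(axis=1)
        # Update each center as the mean of its assigned tokens.
        for k in range(memory_size):
            members = tokens[assign == k]
            if len(members):
                centers[k] = members.mean(axis=0)
    return centers

# Streaming usage: process the video frame by frame, keeping memory bounded.
rng = np.random.default_rng(1)
memory = np.zeros((0, 4))           # empty memory, 4-dim toy features
for _ in range(3):                  # three incoming "frames" of 10 tokens each
    frame_tokens = rng.normal(size=(10, 4))
    memory = cluster_memory(memory, frame_tokens, memory_size=8)
print(memory.shape)                 # memory stays at most (8, 4)
```

Because the memory is re-clustered after every frame rather than appended to, a decoder reading this memory can emit captions at any point mid-video, which is what enables the streaming decoding described above.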

Xingyi Zhou, Anurag Arnab, Shyamal Buch, Shen Yan, Austin Myers, Xuehan Xiong, Arsha Nagrani, Cordelia Schmid · 2024

Related benchmarks

Task                      | Dataset                    | Result     | Rank
Video Captioning          | ActivityNet Captions (val) | METEOR 10  | 22
Video Level Summarization | YouCook2                   | METEOR 7.1 | 21
Event Localization        | YouCook2 (val)             | --         | 13
Event Captioning          | YouCook2 1.0 (val)         | METEOR 7.1 | 12
Event Localization        | ViTT (test)                | --         | 4
Event Captioning          | ViTT (test)                | CIDEr 25.2 | 3
