
Joint Event Detection and Description in Continuous Video Streams

About

Dense video captioning is a fine-grained video understanding task that involves two sub-problems: localizing distinct events in a long video stream, and generating captions for the localized events. We propose the Joint Event Detection and Description Network (JEDDi-Net), which solves the dense video captioning task in an end-to-end fashion. Our model continuously encodes the input video stream with three-dimensional convolutional layers, proposes variable-length temporal events based on pooled features, and generates their captions. Proposal features are extracted within each proposal segment through 3D Segment-of-Interest pooling from shared video feature encoding. In order to explicitly model temporal relationships between visual events and their captions in a single video, we also propose a two-level hierarchical captioning module that keeps track of context. On the large-scale ActivityNet Captions dataset, JEDDi-Net demonstrates improved results as measured by standard metrics. We also present the first dense captioning results on the TACoS-MultiLevel dataset.
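The 3D Segment-of-Interest pooling step can be illustrated along the temporal axis alone: a variable-length proposal is divided into a fixed number of bins, and features are pooled within each bin so every proposal yields a fixed-size representation. The sketch below is a simplified 1D analog of the paper's 3D operation; the function name, bin count, and choice of max pooling are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def temporal_soi_pool(features, start, end, num_bins=4):
    """Pool a variable-length temporal segment into a fixed number of bins.

    features: (T, C) array of per-timestep video features.
    start, end: proposal boundaries in timesteps (end exclusive).
    Returns a (num_bins, C) array, max-pooled within each bin.

    Simplified 1D analog of JEDDi-Net's 3D Segment-of-Interest pooling;
    bin-edge rounding and the pooling op are assumptions for illustration.
    """
    edges = np.linspace(start, end, num_bins + 1)
    pooled = np.empty((num_bins, features.shape[1]))
    for i in range(num_bins):
        lo = int(np.floor(edges[i]))
        hi = max(int(np.ceil(edges[i + 1])), lo + 1)  # ensure non-empty bin
        pooled[i] = features[lo:hi].max(axis=0)
    return pooled

# A proposal spanning 8 timesteps of 2-channel features pools to (4, 2),
# regardless of the proposal's original length.
feats = np.arange(16, dtype=float).reshape(8, 2)
out = temporal_soi_pool(feats, 0, 8, num_bins=4)
```

Because every proposal is reduced to the same `(num_bins, C)` shape, downstream caption decoding can use a fixed-size input even though proposals vary in duration.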

Huijuan Xu, Boyang Li, Vasili Ramanishka, Leonid Sigal, Kate Saenko • 2018

Related benchmarks

Task                      Dataset                         Metric           Result   Rank
Dense Video Captioning    ActivityNet Captions (val)      METEOR           8.58     54
Dense Video Captioning    ActivityNet-Captions (test)     METEOR           8.81     8
Temporal Action Proposal  ActivityNet Captions 1.0 (val)  AUC (tIoU=0.8)   0.5913   5

Other info

Code
