
Learning Modality Interaction for Temporal Sentence Localization and Event Captioning in Videos

About

Automatically generating sentences to describe events and temporally localizing sentences in a video are two important tasks that bridge language and videos. Recent techniques leverage the multimodal nature of videos by using off-the-shelf features to represent videos, but interactions between modalities are rarely explored. Inspired by the fact that cross-modal interactions exist in the human brain, we propose a novel method for learning pairwise modality interactions in order to better exploit complementary information for each pair of modalities in videos and thus improve performance on both tasks. We model modality interaction at both the sequence and channel levels in a pairwise fashion, and the pairwise interaction also provides some explainability for the predictions of target tasks. We demonstrate the effectiveness of our method and validate specific design choices through extensive ablation studies. Our method achieves state-of-the-art performance on four standard benchmark datasets: MSVD and MSR-VTT (event captioning task), and Charades-STA and ActivityNet Captions (temporal sentence localization task).
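The abstract describes fusing each pair of modality feature streams at two levels: across time steps (sequence level) and across feature channels (channel level). The sketch below is a minimal, hedged illustration of that idea, not the paper's actual architecture: it pairs two modality sequences with cross-attention over time and a channel gate derived from the partner modality. All function names and the specific fusion choices (scaled dot-product attention, sigmoid channel gating, additive combination) are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def pairwise_interaction(a, b):
    """Illustrative pairwise fusion of two modality sequences a, b of shape (T, d).

    Sequence level: each time step of `a` attends over all time steps of `b`.
    Channel level: `a`'s channels are gated by a pooled summary of `b`.
    (Both choices are assumptions for illustration, not the paper's design.)
    """
    T, d = a.shape
    # sequence-level interaction: (T, T) cross-modal attention weights
    attn = softmax(a @ b.T / np.sqrt(d), axis=-1)
    seq_ctx = attn @ b                               # (T, d) b-context per a step
    # channel-level interaction: sigmoid gate from b's temporal mean
    gate = 1.0 / (1.0 + np.exp(-b.mean(axis=0)))     # (d,) per-channel gate
    chan_ctx = a * gate                              # (T, d) gated channels
    # additive combination of both interaction levels
    return a + seq_ctx + chan_ctx                    # fused features, (T, d)
```

In a multimodal video model, such a function would be applied to every pair of modality streams (e.g. appearance, motion, audio) and the fused outputs fed to the captioning decoder or localization head; the attention matrix also offers a rough form of explainability, since it shows which time steps of one modality each step of the other attends to.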

Shaoxiang Chen, Wenhao Jiang, Wei Liu, Yu-Gang Jiang • 2020

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Video Captioning | MSVD | CIDEr | 95.1 | 128 |
| Video Captioning | MSR-VTT (test) | CIDEr | 49.4 | 121 |
| Video Captioning | MSVD (test) | CIDEr | 95.1 | 111 |
| Video Captioning | MSRVTT | CIDEr | 49.4 | 101 |
| Video Captioning | MSRVTT (test) | CIDEr | 49.4 | 61 |
| Temporal Grounding | ActivityNet Captions | Recall@1 (IoU=0.5) | 38.28 | 45 |
| Video Grounding | ActivityNet Captions | R@1 (IoU=0.5) | 38.28 | 43 |
| Video Captioning | MSRVTT (val) | BLEU@4 | 42.1 | 11 |
