
VLCap: Vision-Language with Contrastive Learning for Coherent Video Paragraph Captioning

About

In this paper, we leverage the human perception process, which involves the interaction between vision and language, to generate a coherent paragraph description of untrimmed videos. We propose vision-language (VL) features consisting of two modalities: (i) a vision modality that captures the global visual content of the entire scene, and (ii) a language modality that extracts descriptions of scene elements, covering both human and non-human objects (e.g., animals, vehicles) as well as visual and non-visual elements (e.g., relations, activities). Furthermore, we train the proposed VLCap model under a contrastive learning VL loss. Experiments and ablation studies on the ActivityNet Captions and YouCookII datasets show that VLCap outperforms existing SOTA methods on both accuracy and diversity metrics.
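
The abstract does not spell out the exact form of the contrastive VL loss, so the snippet below is only a minimal, hypothetical sketch of a symmetric InfoNCE-style objective between paired vision and language embeddings. The function name contrastive_vl_loss, the shared embedding dimension, and the temperature value are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch (not the authors' code): a symmetric InfoNCE-style
# contrastive loss between paired vision and language features, assuming
# both modalities have been projected into a shared embedding space.
import torch
import torch.nn.functional as F

def contrastive_vl_loss(vision_feats: torch.Tensor,
                        language_feats: torch.Tensor,
                        temperature: float = 0.07) -> torch.Tensor:
    """vision_feats, language_feats: (batch, dim) paired embeddings."""
    v = F.normalize(vision_feats, dim=-1)
    l = F.normalize(language_feats, dim=-1)
    # (batch, batch) cosine-similarity matrix scaled by temperature.
    logits = v @ l.t() / temperature
    # Matched (video, description) pairs lie on the diagonal;
    # every other pair in the batch acts as a negative.
    targets = torch.arange(v.size(0), device=v.device)
    loss_v2l = F.cross_entropy(logits, targets)
    loss_l2v = F.cross_entropy(logits.t(), targets)
    return 0.5 * (loss_v2l + loss_l2v)

# Example usage: 8 paired vision/language embeddings of dimension 512.
if __name__ == "__main__":
    v = torch.randn(8, 512)
    t = torch.randn(8, 512)
    print(contrastive_vl_loss(v, t))
```

Averaging the vision-to-language and language-to-vision terms keeps the objective symmetric, so neither modality dominates the alignment.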

Kashu Yamazaki, Sang Truong, Khoa Vo, Michael Kidd, Chase Rainwater, Khoa Luu, Ngan Le • 2022

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Video Captioning | YouCook II (val) | CIDEr | 49.41 | 98 |
| Video Paragraph Captioning | ActivityNet Captions ae (val) | METEOR | 17.78 | 43 |
| Video Paragraph Captioning | ActivityNet Captions ae (test) | BLEU@4 | 13.38 | 24 |
| Video Captioning | ActivityNet Captions | CIDEr | 30.3 | 10 |
| Narrative Action Evaluation | MTL-NAE re-annotated (test) | mAP | 19.7 | 7 |
| Narrative Action Evaluation | FineGym NAE re-annotated (test) | mAP | 8.6 | 7 |

Other info

Code
