
VSTAR: A Video-grounded Dialogue Dataset for Situated Semantic Understanding with Scene and Topic Transitions

About

Video-grounded dialogue understanding is a challenging problem that requires machines to perceive, parse, and reason over situated semantics extracted from weakly aligned video and dialogue. Most existing benchmarks treat both modalities as a frame-independent visual understanding task, neglecting intrinsic attributes of multimodal dialogues such as scene and topic transitions. In this paper, we present the Video-grounded Scene&Topic AwaRe dialogue (VSTAR) dataset, a large-scale video-grounded dialogue understanding dataset based on 395 TV series. Building on VSTAR, we propose two benchmarks for video-grounded dialogue understanding, scene segmentation and topic segmentation, and one benchmark for video-grounded dialogue generation. Comprehensive experiments on these benchmarks demonstrate the importance of multimodal information and segments in video-grounded dialogue understanding and generation.

Yuxuan Wang, Zilong Zheng, Xueliang Zhao, Jinpeng Li, Yueqian Wang, Dongyan Zhao • 2023

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|------|---------|--------|--------|------|
| Video Topic Segmentation | AVLecture (test) | F1 Score | 53.45 | 14 |
| Video Topic Segmentation | CLVTS (test) | F1 Score | 34.55 | 12 |
| Video-grounded Dialogue Generation | VSTAR (test) | BLEU-1 | 0.092 | 9 |
| Dialogue Scene Segmentation | VSTAR (test) | mIoU | 53.6 | 7 |
| Dialogue Topic Segmentation | VSTAR | WinDiff | 0.374 | 7 |
| Video-grounded Dialogue Generation | OpenViDial (test) | Win Rate | 20 | 2 |
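The dialogue topic segmentation row above reports WindowDiff, which scores a predicted segmentation against a reference by sliding a fixed-size window over the sequence and counting windows where the number of boundaries disagrees (lower is better). As a rough illustration only, here is a minimal sketch of that metric; the function name and the 0/1 boundary encoding are our own choices, and VSTAR's official evaluation code may handle window size and edge cases differently:

```python
def window_diff(ref, hyp, k):
    """Sketch of the WindowDiff segmentation metric.

    ref, hyp: equal-length sequences of 0/1 boundary indicators
              (1 = a segment boundary follows this position).
    k:        window size, typically half the mean reference
              segment length.
    Returns the fraction of windows where the boundary counts
    of reference and hypothesis disagree.
    """
    n = len(ref)
    assert len(hyp) == n and 0 < k < n
    errors = sum(
        1
        for i in range(n - k)
        if sum(ref[i:i + k]) != sum(hyp[i:i + k])
    )
    return errors / (n - k)


ref = [0, 0, 1, 0, 0, 1, 0, 0]  # reference topic boundaries
hyp = [0, 1, 0, 0, 0, 1, 0, 0]  # predicted boundaries (one shifted)
print(window_diff(ref, ref, k=3))  # identical → 0.0
print(window_diff(ref, hyp, k=3))  # one misplaced boundary → 0.2
```

A perfect segmentation scores 0.0; because disagreements are counted per window rather than per boundary, near-miss boundaries are penalized less than entirely missed ones.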

Other info

Code
