VSTAR: A Video-grounded Dialogue Dataset for Situated Semantic Understanding with Scene and Topic Transitions
About
Video-grounded dialogue understanding is a challenging problem that requires machines to perceive, parse and reason over situated semantics extracted from weakly aligned video and dialogue. Most existing benchmarks treat the two modalities as in a frame-independent visual understanding task, neglecting intrinsic attributes of multimodal dialogues such as scene and topic transitions. In this paper, we present the Video-grounded Scene&Topic AwaRe dialogue (VSTAR) dataset, a large-scale video-grounded dialogue understanding dataset built from 395 TV series. On top of VSTAR, we propose two benchmarks for video-grounded dialogue understanding, scene segmentation and topic segmentation, and one benchmark for video-grounded dialogue generation. Comprehensive experiments on these benchmarks demonstrate the importance of multimodal information and segments in video-grounded dialogue understanding and generation.
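The two segmentation benchmarks are typically scored with overlap- and boundary-based metrics (mIoU and WindowDiff, as reported in the table below). As a rough illustration, here is a minimal Python sketch of how such metrics can be computed; the segment representation, the greedy best-overlap formulation of mIoU, and the use of NLTK's `windowdiff` are assumptions made for illustration and are not the official VSTAR evaluation code.

```python
from nltk.metrics.segmentation import windowdiff

def segment_iou(pred, gold):
    """IoU of two segments given as (start, end) index pairs (end exclusive)."""
    inter = max(0, min(pred[1], gold[1]) - max(pred[0], gold[0]))
    union = max(pred[1], gold[1]) - min(pred[0], gold[0])
    return inter / union if union else 0.0

def mean_iou(pred_segments, gold_segments):
    """Average, over gold segments, of the best IoU with any predicted segment.
    One common formulation of mIoU for scene segmentation; the official
    VSTAR metric may differ in detail."""
    return sum(max(segment_iou(p, g) for p in pred_segments)
               for g in gold_segments) / len(gold_segments)

# Topic segmentation: boundaries encoded as 0/1 strings, one digit per dialogue turn.
gold_boundaries = "0100100010"
pred_boundaries = "0100010010"
k = 3  # window size, often set to half the mean gold segment length
print(windowdiff(gold_boundaries, pred_boundaries, k))   # lower is better
print(mean_iou([(0, 4), (4, 10)], [(0, 5), (5, 10)]))    # higher is better
```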
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Video Topic Segmentation | AVLecture (test) | F1 Score | 53.45 | 14 |
| Video Topic Segmentation | CLVTS (test) | F1 Score | 34.55 | 12 |
| Video-grounded dialogue generation | VSTAR (test) | BLEU-1 | 0.092 | 9 |
| Dialogue Scene Segmentation | VSTAR (test) | mIoU | 53.6 | 7 |
| Dialogue Topic Segmentation | VSTAR | WindowDiff | 0.374 | 7 |
| Video-grounded dialogue generation | OpenViDial (test) | Win Rate | 20 | 2 |
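For the video-grounded dialogue generation benchmark, BLEU-1 measures unigram overlap between generated and reference responses. Below is a minimal sketch of how such a score could be computed with NLTK; the tokenization and smoothing choices are assumptions for illustration, not the official evaluation script.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

# Hypothetical reference/hypothesis pair; real evaluation averages over the test set.
reference = "i think the suspect left before midnight".split()
hypothesis = "the suspect left after midnight".split()

# BLEU-1: unigram precision only, with smoothing for short dialogue responses.
score = sentence_bleu(
    [reference], hypothesis,
    weights=(1.0, 0, 0, 0),
    smoothing_function=SmoothingFunction().method1,
)
print(f"BLEU-1: {score:.3f}")
```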