
Unified Multisensory Perception: Weakly-Supervised Audio-Visual Video Parsing

About

In this paper, we introduce a new problem, named audio-visual video parsing, which aims to parse a video into temporal event segments and label them as audible, visible, or both. Such a problem is essential for a complete understanding of the scene depicted inside a video. To facilitate exploration, we collect a Look, Listen, and Parse (LLP) dataset to investigate audio-visual video parsing in a weakly-supervised manner. This task can be naturally formulated as a Multimodal Multiple Instance Learning (MMIL) problem. Concretely, we propose a novel hybrid attention network to explore unimodal and cross-modal temporal contexts simultaneously. We develop an attentive MMIL pooling method to adaptively aggregate useful audio and visual content from different temporal extents and modalities. Furthermore, we discover and mitigate modality bias and noisy label issues with an individual-guided learning mechanism and a label smoothing technique, respectively. Experimental results show that the challenging audio-visual video parsing can be achieved even with only video-level weak labels. Our proposed framework can effectively leverage unimodal and cross-modal temporal contexts and alleviate modality bias and noisy label problems.
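The attentive MMIL pooling described above can be illustrated with a minimal numpy sketch: per-segment, per-modality event probabilities are aggregated into a single video-level prediction by softmax attention over the temporal axis and over the modality axis. This is an illustrative simplification, not the authors' implementation; the shapes and function names below are assumptions for the sketch.

```python
import numpy as np

def softmax(x, axis):
    """Numerically stable softmax along a given axis."""
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attentive_mmil_pool(probs, time_logits, mod_logits):
    """Aggregate snippet-level event probabilities into a video-level
    prediction (hypothetical sketch of attentive MMIL pooling).

    probs:       (M, T, C) per-modality, per-segment event probabilities
                 (M modalities, e.g. audio/visual; T segments; C classes)
    time_logits: (M, T, C) attention logits, normalized over the T axis
    mod_logits:  (M, T, C) attention logits, normalized over the M axis
    """
    w_time = softmax(time_logits, axis=1)  # which segments matter per modality
    w_mod = softmax(mod_logits, axis=0)    # which modality matters per segment
    # weighted sum over modalities and time -> (C,) video-level probabilities
    return (w_time * w_mod * probs).sum(axis=(0, 1))
```

With uniform (all-zero) attention logits this reduces to a plain mean over segments and modalities; the learned logits let the model emphasize the segments and the modality in which each event is actually observed, which is what makes training from video-level weak labels possible.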

Yapeng Tian, Dingzeyu Li, Chenliang Xu • 2020

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Audio-Visual Event Localization | AVE | Accuracy: 75.3 | 35 |
| Audio-visual video parsing (Segment-level) | LLP (test) | Audio Score: 60.1 | 15 |
| Audio-visual video parsing (Event-level) | LLP (test) | Acc (A): 51.3 | 15 |
| Audio-Visual Video Parsing | LLP 1.0 (test) | Segment-level Audio: 60.1 | 13 |
| Audio-Visual Video Parsing | LLP (test) | Audio Segment Score: 60.1 | 11 |
| Image Guided Audio Temporal Localization | LLP (test) | F1 Score: 48.93 | 5 |
| Image Guided Audio Temporal Localization | AudioSet Strong (test) | F1 Score: 49.2 | 5 |
