
Co-Separating Sounds of Visual Objects

About

Learning how objects sound from video is challenging, since they often heavily overlap in a single audio channel. Current methods for visually-guided audio source separation sidestep the issue by training with artificially mixed video clips, but this puts unwieldy restrictions on training data collection and may even prevent learning the properties of "true" mixed sounds. We introduce a co-separation training paradigm that permits learning object-level sounds from unlabeled multi-source videos. Our novel training objective requires that the deep neural network's separated audio for similar-looking objects be consistently identifiable, while simultaneously reproducing accurate video-level audio tracks for each source training pair. Our approach disentangles sounds in realistic test videos, even in cases where an object was not observed individually during training. We obtain state-of-the-art results on visually-guided audio source separation and audio denoising for the MUSIC, AudioSet, and AV-Bench datasets.
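To make the two-part objective above concrete, here is a minimal, hypothetical sketch in plain Python (not the authors' implementation, which operates on spectrogram ratio masks predicted by a deep network per detected object): a separation term requires the per-object separated magnitudes to sum back to the video's mixture, and a consistency term requires an auxiliary classifier to identify each separated sound as its visual object's category. The function names, the L1 reconstruction form, and the weight `w` are illustrative assumptions.

```python
import math

def separation_loss(masks, mixture):
    # L1 distance between the re-assembled mixture (sum over objects of
    # mask_k * mixture, per time-frequency bin) and the true mixture.
    # Illustrative simplification: bins are a flat list of magnitudes.
    recon = [sum(m[i] for m in masks) * mixture[i] for i in range(len(mixture))]
    return sum(abs(r, ) if False else abs(r - x) for r, x in zip(recon, mixture)) / len(mixture)

def consistency_loss(object_logits, object_labels):
    # Cross-entropy: each separated sound, when re-classified, should be
    # consistently identifiable as its object's category (e.g. "violin").
    total = 0.0
    for logits, label in zip(object_logits, object_labels):
        z = max(logits)  # stabilize log-sum-exp
        log_norm = z + math.log(sum(math.exp(v - z) for v in logits))
        total += log_norm - logits[label]
    return total / len(object_labels)

def coseparation_loss(masks, mixture, object_logits, object_labels, w=0.05):
    # Combined co-separation-style objective; w is an assumed trade-off weight.
    return separation_loss(masks, mixture) + w * consistency_loss(object_logits, object_labels)
```

With masks that partition the mixture (summing to 1 per bin) and confident, correct classifier logits, both terms are near zero; masks that over- or under-explain the mixture, or separated sounds the classifier cannot identify, drive the loss up.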

Ruohan Gao, Kristen Grauman • 2019

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Audio Source Separation | MUSIC (test) | SDR | 7.64 | 8 |
| Audio-visual source separation | AudioSet | SDR | 4.26 | 6 |
| Audio-visual source separation | SOLOS | SDR | 7.11 | 6 |
| Audio-visual source separation | MUSIC solos | SDR | 7.38 | 6 |
| Audio-visual source separation | MUSIC duets | SDR | 7.42 | 6 |
| Direction Prediction | ASIW (test) | Accuracy (10-class) | 32.2 | 6 |
| Direction Prediction | AVE (test) | Accuracy (10-class) | 30.2 | 6 |
| Audio Separation | Audio Separation in the Wild (ASIW) (test) | SDR | 6.6 | 6 |
| Audio Separation | Audio-Visual Event (AVE) (test) | SDR | 3.9 | 6 |
| Audio Source Separation | AudioSet SingleSource (test) | SDR | 4.26 | 5 |

(Showing 10 of 13 rows.)
