
Exploiting Transformation Invariance and Equivariance for Self-supervised Sound Localisation

About

We present a simple yet effective self-supervised framework for audio-visual representation learning that localises the sound source in videos. To understand what enables the learning of useful representations, we systematically investigate the effects of data augmentations and reveal that (1) the composition of data augmentations plays a critical role, i.e. explicitly encouraging the audio-visual representations to be invariant to various transformations (transformation invariance); and (2) enforcing geometric consistency substantially improves the quality of the learned representations, i.e. the detected sound source should follow the same transformation applied to the input video frames (transformation equivariance). Extensive experiments demonstrate that our model significantly outperforms previous methods on two sound localisation benchmarks, namely Flickr-SoundNet and VGG-Sound. We additionally evaluate on audio retrieval and cross-modal retrieval tasks; in both cases, our self-supervised models achieve superior retrieval performance, even competitive with the supervised approach on audio retrieval. This shows that the proposed framework learns strong multi-modal representations that benefit sound localisation and generalise to further applications. All code will be made available.

Jinxiang Liu, Chen Ju, Weidi Xie, Ya Zhang • 2022
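To make the two training signals from the abstract concrete, here is a minimal PyTorch sketch. It is illustrative only, not the authors' released implementation: the ToyLocaliser model, the feature dimensions, the cosine/MSE loss choices, and the use of a horizontal flip as the geometric transformation are all assumptions. The invariance term pulls together embeddings of augmented views; the equivariance term requires the predicted sound-source heatmap of a flipped frame to equal the flipped heatmap of the original frame.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyLocaliser(nn.Module):
    """Hypothetical stand-in for an audio-visual localiser: maps a frame and
    an audio feature to a sound-source heatmap plus global embeddings."""
    def __init__(self, dim=64):
        super().__init__()
        self.visual = nn.Conv2d(3, dim, kernel_size=3, padding=1)
        self.audio = nn.Linear(128, dim)  # assumes 128-d audio features

    def forward(self, frames, audio_feat):
        v = self.visual(frames)                       # (B, D, H, W) spatial visual features
        a = self.audio(audio_feat)                    # (B, D) audio embedding
        heatmap = torch.einsum('bdhw,bd->bhw', v, a)  # audio-visual similarity map
        v_global = v.mean(dim=(2, 3))                 # (B, D) global visual embedding
        return heatmap, v_global, a

def invariance_loss(emb_a, emb_b):
    # Transformation invariance: embeddings of the two modalities (or of two
    # augmented views) of the same clip should agree.
    return 1 - F.cosine_similarity(emb_a, emb_b, dim=-1).mean()

def equivariance_loss(model, frames, audio_feat):
    # Transformation equivariance: the heatmap predicted for a flipped frame
    # should be the horizontal flip of the heatmap for the original frame.
    hmap, _, _ = model(frames, audio_feat)
    hmap_flip, _, _ = model(torch.flip(frames, dims=[-1]), audio_feat)
    return F.mse_loss(hmap_flip, torch.flip(hmap, dims=[-1]))

# Usage with random tensors standing in for a batch of frames and audio features.
model = ToyLocaliser()
frames = torch.randn(4, 3, 224, 224)
audio = torch.randn(4, 128)
_, v_emb, a_emb = model(frames, audio)
loss = invariance_loss(v_emb, a_emb) + equivariance_loss(model, frames, audio)
loss.backward()
```

In practice the flip would be one of several composed augmentations (crops, colour jitter, etc.), and only the geometric ones participate in the equivariance term, since photometric changes should not move the predicted source.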

Related benchmarks

Task                              Dataset                  Metric  Result  Rank
Sound Source Localization        Flickr SoundNet (test)   CIoU    79.5    28
Audio-visual source localization Flickr-SoundNet 10k      CIoU    75.5    14
Audio-visual source localization Flickr-SoundNet 144k     CIoU    81.5    14
Audio referred image grounding   PascalSound (test)       cIoU    52.14   10
Audio referred image grounding   AVSBench (test)          cIoU    62.88   10
Audio referred image grounding   VGG-SS (test)            cIoU    38.63   10
