
A Unified Audio-Visual Learning Framework for Localization, Separation, and Recognition

About

The ability to accurately recognize, localize, and separate sound sources is fundamental to any audio-visual perception task. Historically, these abilities were tackled separately, with independent methods developed for each task. However, given the interconnected nature of source localization, separation, and recognition, independent models are likely to yield suboptimal performance, as they fail to capture the interdependence between these tasks. To address this problem, we propose a unified audio-visual learning framework (dubbed OneAVM) that integrates audio and visual cues for joint localization, separation, and recognition. OneAVM comprises a shared audio-visual encoder and task-specific decoders trained with three objectives. The first objective aligns audio and visual representations through a localized audio-visual correspondence loss. The second tackles visual source separation using a traditional mix-and-separate framework. Finally, the third objective reinforces visual feature separation and localization by mixing images in pixel space and aligning their representations with those of all corresponding sound sources. Extensive experiments on the MUSIC, VGG-Instruments, VGG-Music, and VGGSound datasets demonstrate the effectiveness of OneAVM on all three tasks (audio-visual source localization, separation, and nearest-neighbor recognition) and show a strong positive transfer between them.
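To make the mix-and-separate objective above concrete, here is a minimal sketch of its supervision signal. This is an illustrative assumption, not the paper's implementation: in OneAVM the masks would be predicted by the separation decoder conditioned on visual features, while here we simply compute ideal ratio masks from two magnitude spectrograms and penalize the L1 gap to the predictions. All names (`mix_and_separate_loss`, `pred_mask_a`, etc.) are hypothetical.

```python
import numpy as np

def mix_and_separate_loss(spec_a, spec_b, pred_mask_a, pred_mask_b, eps=1e-8):
    """Sketch of a mix-and-separate objective: mix two magnitude
    spectrograms, derive ideal ratio masks as targets, and penalize
    the L1 distance between predicted and ideal masks."""
    mixture = spec_a + spec_b
    ideal_a = spec_a / (mixture + eps)  # ideal ratio mask for source A
    ideal_b = spec_b / (mixture + eps)  # ideal ratio mask for source B
    return float(np.mean(np.abs(pred_mask_a - ideal_a)) +
                 np.mean(np.abs(pred_mask_b - ideal_b)))
```

With perfect mask predictions the loss is zero; a separation model trained this way never needs isolated recordings at test time, since mixtures and their ground-truth components are created synthetically during training.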

Shentong Mo, Pedro Morgado • 2023

Related benchmarks

Task                      Dataset           Result     Rank
Sound source separation   MUSIC             SDR 7.38   7
Sound source separation   VGGS-Instruments  SDR 5.36   7
Sound source separation   VGGS-Music        SDR 2.51   7
