
Exploiting Attention-based Sequence-to-Sequence Architectures for Sound Event Localization

About

Sound event localization frameworks based on deep neural networks have shown increased robustness to reverberation and noise in comparison to classical parametric approaches. In particular, recurrent architectures that incorporate temporal context into the estimation process seem to be well-suited for this task. This paper proposes a novel approach to sound event localization utilizing an attention-based sequence-to-sequence model. Models of this type have been successfully applied to problems in natural language processing and automatic speech recognition. In this work, a multi-channel audio signal is encoded to a latent representation, which is subsequently decoded to a sequence of estimated directions-of-arrival. The attention mechanism captures temporal dependencies in the audio signal by focusing on the specific frames that are relevant for estimating the activity and direction-of-arrival of sound events at the current time-step. The framework is evaluated on three publicly available datasets for sound event localization. It yields superior localization performance compared to state-of-the-art methods in both anechoic and reverberant conditions.
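The encode-attend-decode flow described in the abstract can be sketched in a few lines of NumPy. This is an illustrative sketch only: the dimensions, the single dot-product attention head, and the output head mapping to (azimuth, elevation, activity) are assumptions for demonstration, not the architecture actually used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Hypothetical sizes: T encoder frames, latent dimension d, U decoder steps.
T, d, U = 50, 16, 10

# Encoder output: latent representation of the multi-channel audio signal
# (stand-in for the real encoder; random values for illustration).
encoder_states = rng.standard_normal((T, d))

# One decoder query per output time-step.
queries = rng.standard_normal((U, d))

# Scaled dot-product attention: each decoder step weighs the encoder
# frames that are relevant for its current estimate.
scores = queries @ encoder_states.T / np.sqrt(d)   # shape (U, T)
weights = softmax(scores, axis=-1)                  # each row sums to 1
context = weights @ encoder_states                  # shape (U, d)

# Illustrative output head: project each context vector to an
# (azimuth, elevation, source-activity) triple.
W_out = rng.standard_normal((d, 3))
outputs = context @ W_out                           # shape (U, 3)
```

At each decoder step the attention weights form a distribution over the input frames, which is what lets the model focus on frames carrying directional cues for the currently active sound event.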

Christopher Schymura, Tsubasa Ochiai, Marc Delcroix, Keisuke Kinoshita, Tomohiro Nakatani, Shoko Araki, Dorothea Kolossa • 2021

Related benchmarks

Task                  Dataset                   Result       Rank
Source Localization   RESYN reverberant (D2)    EMD 25.09    7
Source Localization   RESYN reverberant (D3)    EMD 35.05    7
Source Localization   ANSYN 1.0 (D2)            EMD 14.65    7
Source Localization   ANSYN 1.0 (D3)            EMD 23.41    7
Source Localization   RESYN reverberant (D1)    EMD 10.53    7
Source Localization   ANSYN 1.0 (D1)            EMD 6.22     7
