Real-Time Target Sound Extraction
About
We present the first neural network model to achieve real-time, streaming target sound extraction. To accomplish this, we propose Waveformer, an encoder-decoder architecture with a stack of dilated causal convolution layers as the encoder and a transformer decoder layer as the decoder. This hybrid architecture uses dilated causal convolutions to process large receptive fields in a computationally efficient manner, while also leveraging the generalization performance of transformer-based architectures. Our evaluations show a 2.2-3.3 dB improvement in SI-SNRi over prior models for this task, with a 1.2-4x smaller model size and a 1.5-2x lower runtime. Code, dataset, and audio samples are available at https://waveformer.cs.washington.edu/.
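The efficiency argument above rests on how quickly the receptive field of stacked dilated causal convolutions grows. As a minimal sketch, the helper below computes the number of past samples visible to the final layer for a given kernel size and dilation schedule; the kernel size of 3 and the exponentially growing dilations are illustrative assumptions, not the exact Waveformer hyperparameters.

```python
# Receptive field of a stack of dilated causal convolutions.
# Kernel size and dilation schedule here are illustrative
# assumptions, not the published Waveformer configuration.

def receptive_field(kernel_size: int, dilations: list[int]) -> int:
    """Number of past input samples the last layer's output depends on."""
    return 1 + sum((kernel_size - 1) * d for d in dilations)

# 10 layers with dilations 1, 2, 4, ..., 512: the receptive field
# grows exponentially with depth, while the parameter count and
# per-sample compute grow only linearly.
dilations = [2 ** i for i in range(10)]
print(receptive_field(3, dilations))  # 2047 samples of context
```

Because each layer is causal, every output sample depends only on current and past inputs, which is what makes the encoder usable in a streaming, real-time setting.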
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Multi-target Sound Extraction | FSD Kaggle 2018 + TAU Urban Acoustic Scenes 2019 synthetic mixture (test) | SI-SNRi (1 Class) | 9.39 | 6 |
| Single-target Sound Extraction | FSD Kaggle 2018 and TAU Urban Acoustic Scenes 2019 (test) | SI-SNRi | 9.43 | 6 |
| Target Sound Extraction | testset (test) | SI-SNRi | 11.31 | 3 |