Resource-Efficient Separation Transformer
About
Transformers have recently achieved state-of-the-art performance in speech separation. These models, however, are computationally demanding and require many learnable parameters. This paper explores Transformer-based speech separation at a reduced computational cost. Our main contribution is the Resource-Efficient Separation Transformer (RE-SepFormer), a self-attention-based architecture that reduces the computational burden in two ways. First, it uses non-overlapping blocks in the latent space. Second, it operates on compact latent summaries calculated from each chunk. The RE-SepFormer reaches competitive performance on the popular WSJ0-2Mix and WHAM! datasets in both causal and non-causal settings. Remarkably, it scales significantly better than previous Transformer-based architectures in terms of memory and inference time, making it more suitable for processing long mixtures.
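The two cost-saving ideas above (non-overlapping chunking and per-chunk summaries) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `chunk_and_summarize`, the zero-padding, and the choice of the mean as the summary statistic are assumptions made for the example.

```python
import numpy as np

def chunk_and_summarize(latent, chunk_size):
    """Split a latent sequence into non-overlapping chunks and compute
    one compact summary vector per chunk (here: the chunk mean).

    latent: array of shape (T, D). The sequence is zero-padded so that
    T becomes a multiple of chunk_size (an assumption for this sketch).
    """
    T, D = latent.shape
    pad = (-T) % chunk_size
    if pad:
        latent = np.concatenate([latent, np.zeros((pad, D))], axis=0)
    chunks = latent.reshape(-1, chunk_size, D)   # (num_chunks, C, D)
    summaries = chunks.mean(axis=1)              # (num_chunks, D)
    return chunks, summaries

# Toy latent sequence: T=6 frames, D=2 features.
x = np.arange(12.0).reshape(6, 2)
chunks, summaries = chunk_and_summarize(x, chunk_size=3)
print(chunks.shape, summaries.shape)  # (2, 3, 2) (2, 2)
```

Because attention would then operate within each chunk and across the short sequence of summaries rather than over all pairs of frames, the quadratic cost in sequence length is avoided, which is the source of the memory and inference-time savings described above.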
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Multi-target sound extraction | FSD Kaggle 2018 + TAU Urban Acoustic Scenes 2019 synthetic mixture (test) | SI-SNRi (1 class): 7.42 | 6 |
| Single-target sound extraction | FSD Kaggle 2018 + TAU Urban Acoustic Scenes 2019 (test) | SI-SNRi: 7.26 | 6 |