
Self-Attention for Audio Super-Resolution

About

Convolutions operate only locally and thus fail to model global interactions. Self-attention, by contrast, can learn representations that capture long-range dependencies in sequences. We propose a network architecture for audio super-resolution that combines convolution and self-attention. Attention-based Feature-Wise Linear Modulation (AFiLM) uses a self-attention mechanism instead of recurrent neural networks to modulate the activations of the convolutional model. Extensive experiments show that our model outperforms existing approaches on standard benchmarks. Moreover, it allows for more parallelization, resulting in significantly faster training.

Nathana\"el Carraz Rakotonirina• 2021

Related benchmarks

Task                    Dataset                      Result (SNR, dB)  Rank
Audio Super-Resolution  Piano (test)                 25.7              23
Audio Super-Resolution  VCTK Multi-speaker (test)    20                15
Audio Super-Resolution  VCTK Single-speaker (test)   19.3              15
