
Single and Few-step Diffusion for Generative Speech Enhancement

About

Diffusion models have shown promising results in speech enhancement, using a task-adapted diffusion process for the conditional generation of clean speech given a noisy mixture. However, at test time, the neural network used for score estimation is called multiple times to solve the iterative reverse process. This results in slow inference and causes discretization errors that accumulate over the sampling trajectory. In this paper, we address these limitations through a two-stage training approach. In the first stage, we train the diffusion model as usual using the generative denoising score matching loss. In the second stage, we compute the enhanced signal by solving the reverse process and compare the resulting estimate to the clean speech target using a predictive loss. We show that this second training stage enables the model to achieve the same performance as the baseline using only 5 function evaluations instead of 60. While the performance of usual generative diffusion algorithms drops dramatically when the number of function evaluations (NFEs) is lowered down to single-step diffusion, we show that our proposed method maintains steady performance, therefore largely outperforming the diffusion baseline in this setting, and also generalizes better than its predictive counterpart.
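The two-stage recipe described in the abstract can be sketched on a toy 1-D problem: stage 1 trains a score model with denoising score matching, and stage 2 fine-tunes it by unrolling a few-step reverse solve and applying a predictive (MSE) loss against the clean target. Everything below is a hedged illustration, not the paper's implementation: the linear score model, the noise schedule, the Euler probability-flow solver, and the finite-difference "SGD" are all simplified stand-ins chosen so the script stays self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigma(t):
    """Toy noise schedule (an assumption, not the paper's): std of the
    Gaussian noise mixed into the clean signal at diffusion time t."""
    return 0.1 + 0.9 * t

def score(theta, x_t):
    """Toy linear score model s_theta(x_t) = theta * x_t,
    a scalar stand-in for the paper's score network."""
    return theta * x_t

def dsm_loss(theta, x0, z, t=0.5):
    """Stage 1: denoising score matching -- regress the model score
    onto the score of the perturbation kernel, -z / sigma(t)."""
    x_t = x0 + sigma(t) * z
    target = -z / sigma(t)
    return np.mean((score(theta, x_t) - target) ** 2)

def enhance(theta, x1, n_steps):
    """Solve the reverse process with n_steps Euler steps of the
    probability-flow ODE dx = -0.5 * d(sigma^2)/dt * score dt."""
    x, t, dt = x1, 1.0, 1.0 / n_steps
    for _ in range(n_steps):
        # for this schedule, 0.5 * d(sigma^2)/dt = 0.9 * sigma(t)
        x = x + dt * 0.9 * sigma(t) * score(theta, x)
        t -= dt
    return x

def predictive_loss(theta, x0, z, n_steps=1):
    """Stage 2: run the few-step reverse solve on the fully noised
    input and compare the enhanced estimate to the clean target."""
    x1 = x0 + sigma(1.0) * z
    return np.mean((enhance(theta, x1, n_steps) - x0) ** 2)

def train(loss_fn, theta, iters=300, lr=0.05, eps=1e-4):
    """Scalar SGD with finite-difference gradients; a real model would
    instead backpropagate through the unrolled reverse solver."""
    for _ in range(iters):
        x0, z = rng.standard_normal(256), rng.standard_normal(256)
        g = (loss_fn(theta + eps, x0, z) - loss_fn(theta - eps, x0, z)) / (2 * eps)
        theta -= lr * g
    return theta

# Stage 1: usual diffusion training with the score matching loss.
theta = train(dsm_loss, 0.0)

# Stage 2: fine-tune through a single-step (NFE = 1) reverse solve.
x0_eval, z_eval = rng.standard_normal(10_000), rng.standard_normal(10_000)
loss_before = predictive_loss(theta, x0_eval, z_eval)
theta = train(predictive_loss, theta)
loss_after = predictive_loss(theta, x0_eval, z_eval)
```

On this toy problem the stage-2 fine-tuning measurably reduces the single-step enhancement error relative to the purely score-matched model, mirroring the paper's observation that the second stage is what keeps performance steady at very low NFEs.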

Bunlong Lay, Jean-Marie Lemercier, Julius Richter, Timo Gerkmann • 2023

Related benchmarks

Task               | Dataset                                                    | Result    | Rank
Speech Enhancement | VoiceBank + DEMAND (VB-DMD) (test)                         | PESQ 2.31 | 105
Speech Enhancement | URGENT 2024 (test)                                         | PESQ 3.1  | 12
Speech Enhancement | URGENT Speech Enhancement Challenge 50-sample 2024 (test)  | MOS 3.71  | 12
