
DRED: Deep REDundancy Coding of Speech Using a Rate-Distortion-Optimized Variational Autoencoder

About

Despite recent advancements in packet loss concealment (PLC) using deep learning techniques, packet loss remains a significant challenge in real-time speech communication. Redundancy has been used in the past to recover the missing information during losses. However, conventional redundancy techniques are limited in the maximum loss duration they can cover and are often unsuitable for burst packet loss. We propose a new approach based on a rate-distortion-optimized variational autoencoder (RDO-VAE), allowing us to optimize a deep speech compression algorithm for the task of encoding large amounts of redundancy at very low bitrate. The proposed Deep REDundancy (DRED) algorithm can transmit up to 50x redundancy using less than 32 kb/s. Results show that DRED outperforms the existing Opus codec redundancy. We also demonstrate its benefits when operating in the context of WebRTC.
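The bitrate budget in the abstract implies a very small per-copy allocation, which is why a dedicated low-rate deep codec is needed. The sketch below is illustrative arithmetic only (the function name and its use are not from the paper): it divides the stated 32 kb/s budget across the stated 50x redundancy.

```python
def per_copy_bitrate(total_kbps: float, redundancy_factor: int) -> float:
    """Average bitrate available per redundant copy, in kb/s.

    Illustrative only: spreading a 32 kb/s redundancy budget over up to
    50 redundant encodings of past frames leaves well under 1 kb/s per
    copy on average -- far below what conventional codecs can use, hence
    the RDO-VAE-based deep compression proposed in DRED.
    """
    return total_kbps / redundancy_factor


# 32 kb/s spread over 50 redundant copies:
budget = per_copy_bitrate(32.0, 50)  # 0.64 kb/s per copy on average
```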

Jean-Marc Valin, Jan Büthe, Ahmed Mustafa, Michael Klingbeil • 2022

Related benchmarks

Task                       | Dataset                              | Result     | Rank
Speech Quality Assessment  | LibriSpeech 5% packet loss (test)    | –          | 8
Speech Reconstruction      | LibriSpeech 30% packet loss (test)   | PESQ 2.14  | 5
Speech Reconstruction      | LibriSpeech 10% packet loss (test)   | PESQ 2.99  | 5
Speech Reconstruction      | LibriSpeech 20% packet loss (test)   | PESQ 2.48  | 5
