
Downstream Task Agnostic Speech Enhancement with Self-Supervised Representation Loss

About

Self-supervised learning (SSL) is the latest breakthrough in speech processing, especially for label-scarce downstream tasks, as it leverages massive amounts of unlabeled audio data. Noise robustness is one of the key challenges in expanding SSL's applications. Speech enhancement (SE) can be used to tackle this issue; however, the mismatch between the SE model and the SSL model potentially limits its effect. In this work, we propose a new SE training criterion that minimizes the distance between clean and enhanced signals in the feature representation of the SSL model, alleviating this mismatch. We expect that a loss in the SSL domain can guide SE training to preserve or enhance the various levels of speech-signal characteristics that high-level downstream tasks may require. Experiments show that our proposal improves the performance of an SE-and-SSL pipeline on five downstream tasks with noisy input while maintaining SE performance.
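The core idea, minimizing the distance between the SSL features of the enhanced and clean signals, can be sketched as follows. This is a minimal illustration, not the paper's implementation: `ssl_encoder` is a hypothetical stand-in for a frozen SSL model (e.g. a wav2vec-style encoder), and the L1 distance is one plausible choice; the paper's exact distance measure and feature layer are not specified here.

```python
import numpy as np

def ssl_encoder(wave):
    # Hypothetical stand-in for a frozen SSL encoder: frame the waveform
    # and apply a fixed random projection with a nonlinearity. The fixed
    # seed mimics frozen (non-trainable) encoder weights.
    rng = np.random.default_rng(0)
    frame_len, feat_dim = 160, 8
    n_frames = len(wave) // frame_len
    frames = wave[: n_frames * frame_len].reshape(n_frames, frame_len)
    weights = rng.standard_normal((frame_len, feat_dim))
    return np.tanh(frames @ weights)        # (n_frames, feat_dim) features

def ssl_representation_loss(enhanced, clean):
    # Proposed training criterion: mean distance between the SSL-domain
    # representations of the enhanced output and the clean reference.
    return float(np.mean(np.abs(ssl_encoder(enhanced) - ssl_encoder(clean))))

# Toy signals: a clean tone and a noisy version of it.
clean = np.sin(np.linspace(0.0, 100.0, 1600))
noisy = clean + 0.3 * np.random.default_rng(1).standard_normal(1600)

# A perfect enhancer (output == clean) drives the loss to zero, while an
# identity "enhancer" that passes the noisy signal through does not.
print(ssl_representation_loss(clean, clean))      # 0.0
print(ssl_representation_loss(noisy, clean) > 0)  # True
```

In training, this loss would be backpropagated through the frozen SSL encoder into the SE model, so the enhancer learns to preserve whatever signal characteristics the SSL features capture rather than only matching waveforms or spectrograms.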

Hiroshi Sato, Ryo Masumura, Tsubasa Ochiai, Marc Delcroix, Takafumi Moriya, Takanori Ashihara, Kentaro Shinayama, Saki Mizuno, Mana Ihori, Tomohiro Tanaka, Nobukatsu Hojo · 2023

Related benchmarks

Task                         | Dataset                                                      | Result   | Rank
Automatic Speech Recognition | LibriSpeech clean (test)                                     | WER 6.21 | 833
Phoneme Recognition          | LibriSpeech clean outdoor 100h noise-augmented (test)        | PER 6.78 | 5
Phoneme Recognition          | LibriSpeech clean + indoor noise 100h noise-augmented (test) | PER 6.63 | 5
Phoneme Recognition          | LibriSpeech clean 100h noise-augmented (test)                | PER 5.17 | 5
Automatic Speech Recognition | LibriSpeech clean + outdoor noise unseen (test)              | WER 9.19 | 5
Automatic Speech Recognition | LibriSpeech clean + indoor noise seen noise (test)           | WER 8.89 | 5
