
HyPoradise: An Open Baseline for Generative Speech Recognition with Large Language Models

About

Advancements in deep neural networks have allowed automatic speech recognition (ASR) systems to attain human parity on several publicly available clean speech datasets. However, even state-of-the-art ASR systems experience performance degradation when confronted with adverse conditions, as a well-trained acoustic model is sensitive to variations in the speech domain, e.g., background noise. Intuitively, humans address this issue by relying on their linguistic knowledge: the meaning of ambiguous spoken terms is usually inferred from contextual cues, thereby reducing the dependency on the auditory system. Inspired by this observation, we introduce the first open-source benchmark to utilize external large language models (LLMs) for ASR error correction, where N-best decoding hypotheses provide informative elements for true transcription prediction. This approach is a paradigm shift from the traditional language model rescoring strategy, which can only select one candidate hypothesis as the output transcription. The proposed benchmark contains a novel dataset, HyPoradise (HP), encompassing more than 334,000 pairs of N-best hypotheses and corresponding accurate transcriptions across prevalent speech domains. Given this dataset, we examine three types of LLM-based error correction techniques with varying amounts of labeled hypotheses-transcription pairs, which yield significant word error rate (WER) reductions. Experimental evidence demonstrates that the proposed technique achieves a breakthrough by surpassing the upper bound of traditional re-ranking based methods. More surprisingly, an LLM with a reasonable prompt and its generative capability can even correct tokens that are missing from the N-best list. We make our results publicly accessible as reproducible pipelines with released pre-trained models, thus providing a new evaluation paradigm for ASR error correction with LLMs.
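The core idea above is a hypotheses-to-transcription task: instead of picking one candidate from the N-best list, the LLM reads all candidates and generates the corrected transcription. A minimal sketch of how such a prompt might be assembled, where `nbest` and the prompt wording are illustrative assumptions and the LLM call itself (any instruction-tuned model) is left out:

```python
def build_h2t_prompt(nbest):
    """Format an N-best hypothesis list into an error-correction prompt."""
    numbered = "\n".join(f"{i + 1}. {hyp}" for i, hyp in enumerate(nbest))
    return (
        "Below are the N-best hypotheses from a speech recognizer for one "
        "utterance. Report the most likely true transcription; you may "
        "combine or correct words across hypotheses.\n"
        f"{numbered}\n"
        "Transcription:"
    )

# Example N-best list (made up): the generative setup can recover words
# such as "sat" even when every single hypothesis gets some token wrong.
nbest = [
    "the cat sad on the mat",
    "the cat sat on the matt",
    "a cat sat on the mat",
]
print(build_h2t_prompt(nbest))
```

The resulting string would then be sent to the LLM; fine-tuned variants instead train on (N-best list, reference transcription) pairs such as those in HyPoradise.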

Chen Chen, Yuchen Hu, Chao-Han Huck Yang, Sabato Marco Siniscalchi, Pin-Yu Chen, Eng Siong Chng • 2023

Related benchmarks

Task                                Dataset                     Result      Rank
Automatic Speech Recognition        LibriSpeech (test-other)    WER 3.8     966
Automatic Speech Recognition        LibriSpeech (test-clean)    WER 1.7     84
Automatic Speech Recognition        AISHELL-1 (test)            CER 5       71
ASR rescoring                       WSJ (test)                  WER 2.2     35
Automatic Speech Recognition        CHiME-4 (test)              WER 5.49    23
ASR Error Correction                CommonVoice (CV) (test)     WER 6.8     18
Speech Recognition                  CHiME-4 real (test)         WER 6.75    18
ASR Error Correction                STOP (test)                 WER 7.4     18
Audio-Visual Speech Recognition     LRS3 (test)                 --          18
Automatic Speech Recognition        WSJ (test)                  WER 0.027   12

(Showing 10 of 32 rows)
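The Result column reports word error rate (WER) or character error rate (CER), i.e., the word- or character-level edit distance between hypothesis and reference, normalized by reference length. As a reference for the metric, a minimal WER implementation over word-level Levenshtein distance:

```python
def wer(reference, hypothesis):
    """Word error rate: (substitutions + insertions + deletions) / reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming table for Levenshtein distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting all i reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / len(ref)

# One substitution ("sat" -> "sad") and one deletion ("the") over six
# reference words gives WER 2/6.
print(wer("the cat sat on the mat", "the cat sad on mat"))
```

CER is computed the same way over characters instead of words.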

Other info

Code
