
Chain of Correction for Full-text Speech Recognition with Large Language Models

About

Full-text error correction with Large Language Models (LLMs) for Automatic Speech Recognition (ASR) is attracting increased attention for its ability to address a wide range of error types, such as punctuation restoration and inverse text normalization, across long contexts. However, challenges remain regarding stability, controllability, completeness, and fluency. To mitigate these issues, this paper proposes the Chain of Correction (CoC), which uses a multi-turn chat format to correct errors segment by segment, guided by the pre-recognized text and full-text context for better semantic understanding. Utilizing the open-sourced ChFT dataset, we fine-tune a pre-trained LLM to evaluate CoC's performance. Experiments show that CoC significantly outperforms baseline and benchmark systems in correcting full-text ASR outputs. We also analyze correction thresholds to balance under-correction and over-rephrasing, extrapolate CoC to extra-long ASR outputs, and explore using other types of information to guide error correction.
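The multi-turn, segment-by-segment loop described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's released prompts or fine-tuning setup: the prompt wording, the segmentation, and the stand-in `correct_segment` callable (which replaces the fine-tuned LLM) are all assumptions for demonstration.

```python
# Sketch of a Chain-of-Correction-style multi-turn chat loop.
# The full pre-recognized text is placed in the system turn as global
# context; each user turn asks for one segment to be corrected, and each
# corrected segment is appended to the history so later turns can see
# earlier corrections. `correct_segment` is a hypothetical stand-in for
# the fine-tuned LLM call.

def run_coc(segments, full_text, correct_segment):
    """Correct ASR output segment by segment in one growing chat."""
    messages = [{
        "role": "system",
        "content": ("Correct ASR errors segment by segment. "
                    f"Full pre-recognized text for context:\n{full_text}"),
    }]
    corrected = []
    for seg in segments:
        messages.append({"role": "user", "content": f"Correct: {seg}"})
        fixed = correct_segment(messages)   # stand-in for the LLM call
        messages.append({"role": "assistant", "content": fixed})
        corrected.append(fixed)
    return " ".join(corrected)

# Toy stand-in "model": fixes one known mis-recognition, passes the rest.
fixes = {"their going home": "they're going home"}

def demo_model(messages):
    seg = messages[-1]["content"].removeprefix("Correct: ")
    return fixes.get(seg, seg)

out = run_coc(["their going home", "at noon"],
              "their going home at noon", demo_model)
```

Keeping the corrected segments in the chat history is what distinguishes this chained format from correcting each segment independently: a turn can stay consistent with corrections already made earlier in the text.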

Zhiyuan Tang, Dong Wang, Zhikai Zhou, Yong Liu, Shen Huang, Shidong Shang• 2025

Related benchmarks

Task                        Dataset                 Metric               Result  Rank
Full-text Error Correction  ChFT Homogeneous 1.0    ER (Mandarin)        4.06    7
Full-text Error Correction  ChFT Hard 1.0           ER (Mandarin)        17.8    3
ASR Error Correction        ChFT extra-long (test)  Mandarin Error Rate  4.19    2
