
LESS: Large Language Model Enhanced Semi-Supervised Learning for Speech Foundational Models Using in-the-wild Data

About

Although state-of-the-art Speech Foundational Models can produce high-quality text pseudo-labels, applying Semi-Supervised Learning (SSL) to in-the-wild data remains challenging because its acoustics are richer and more complex than those of curated datasets. To address these challenges, we introduce LESS (Large Language Model Enhanced Semi-supervised Learning), a versatile framework that uses Large Language Models (LLMs) to correct pseudo-labels generated on in-the-wild data. In the LESS framework, pseudo-labeled text from Automatic Speech Recognition (ASR) or Automatic Speech Translation (AST) of the unsupervised data is refined by an LLM and further improved by a data filtering strategy. Across Mandarin ASR and Spanish-to-English AST evaluations, LESS delivers consistent gains: an absolute Word Error Rate reduction of 3.8% on WenetSpeech, and BLEU score increases of 0.8 and 0.7, reaching 34.0 on the Callhome and 64.7 on the Fisher test sets respectively. These results highlight LESS's effectiveness across diverse languages, tasks, and domains. We have released the recipe as open source to facilitate further research in this area.
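The pipeline the abstract describes — generate pseudo-labels on unsupervised audio, correct them with an LLM, then filter before training — can be sketched as follows. This is a minimal illustration, not the released recipe: `llm_correct` stands in for a real LLM call, and the length-change filter is a hypothetical example of a data filtering strategy.

```python
# Hypothetical sketch of the LESS loop: ASR/AST pseudo-labels from
# unsupervised audio are corrected by an LLM, then filtered before
# joining the training pool. All names here are illustrative.

def llm_correct(hypothesis: str) -> str:
    """Stand-in for an LLM call that repairs pseudo-label errors.
    Here it only normalizes whitespace so the sketch stays runnable."""
    return " ".join(hypothesis.split())

def keep_utterance(original: str, corrected: str,
                   max_change_ratio: float = 0.5) -> bool:
    """Illustrative filter: drop utterances the LLM rewrote heavily,
    since large edits often signal unreliable audio or transcription."""
    orig_tokens = original.split()
    corr_tokens = corrected.split()
    if not orig_tokens:
        return False  # empty pseudo-label carries no training signal
    changed = abs(len(orig_tokens) - len(corr_tokens))
    return changed / len(orig_tokens) <= max_change_ratio

def less_pseudo_label(pseudo_labels: list[str]) -> list[str]:
    """Correct each pseudo-label with the LLM, then apply filtering."""
    kept = []
    for hyp in pseudo_labels:
        corrected = llm_correct(hyp)
        if keep_utterance(hyp, corrected):
            kept.append(corrected)
    return kept

labels = ["  hola   mundo ", ""]
print(less_pseudo_label(labels))  # empty utterances are filtered out
```

The same skeleton applies to both the Mandarin ASR and ES-to-EN AST settings; only the pseudo-labeling model and the LLM prompt would differ.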

Wen Ding, Fan Qian • 2025

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Automatic Speech Recognition | AISHELL-1 (test) | -- | 97 |
| Automatic Speech Recognition | WenetSpeech Meeting (test) | -- | 78 |
| ES-to-EN AST | CALLHOME (test) | BLEU 34.0 | 4 |
| ES-to-EN AST | Fisher (test) | BLEU 64.7 | 4 |
| ES-to-EN AST | Common Voice (test) | BLEU 37.3 | 3 |
