
Contrastive Learning for Task-Independent SpeechLLM-Pretraining

About

Large language models (LLMs) excel at natural language processing, but adapting them to speech processing tasks efficiently is not straightforward. Direct task-specific fine-tuning is limited by overfitting risks, data requirements, and computational costs. To address these challenges, we propose a scalable, two-stage training approach: (1) a task-independent speech pretraining stage that uses contrastive learning to align text and speech representations across all layers, followed by (2) a task-specific fine-tuning stage requiring minimal data. This approach outperforms traditional ASR pretraining and enables the model to surpass models specialized in speech translation and question answering while being trained on only 10% of the task-specific data.
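To make the pretraining objective concrete, here is a minimal sketch (not the authors' released code) of an InfoNCE-style contrastive loss that aligns pooled speech and text representations layer by layer. The function name, the per-layer pooled inputs, and the temperature value are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def contrastive_alignment_loss(speech_states, text_states, temperature=0.07):
    """InfoNCE-style loss aligning speech and text representations.

    speech_states, text_states: lists of [batch, hidden] tensors,
    one pooled representation per LLM layer for each modality.
    """
    total = 0.0
    for s, t in zip(speech_states, text_states):
        s = F.normalize(s, dim=-1)          # work in cosine-similarity space
        t = F.normalize(t, dim=-1)
        logits = s @ t.T / temperature      # [batch, batch] pairwise similarities
        labels = torch.arange(s.size(0), device=s.device)
        # Matching speech/text pairs on the diagonal are positives;
        # the rest of the batch serves as in-batch negatives (both directions).
        total = total + 0.5 * (F.cross_entropy(logits, labels)
                               + F.cross_entropy(logits.T, labels))
    return total / len(speech_states)

# Toy usage: 4 layers, batch of 8 paired utterances, hidden size 16.
speech = [torch.randn(8, 16) for _ in range(4)]
text = [torch.randn(8, 16) for _ in range(4)]
print(contrastive_alignment_loss(speech, text))
```

Averaging the loss over every layer, rather than only the final one, reflects the paper's stated goal of aligning the two modalities across all layers instead of at a single interface.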

Maike Züfle, Jan Niehues • 2024

Related benchmarks

Task                          Dataset                     Metric              Result   Rank
Contrastive Alignment         MuST-C (test)               Cosine Similarity   1.33     36
Speech Recognition            MuST-C (test)               WER (Avg)           9.31     30
Speech Translation            MuST-C (test)               BLEU Score          31.54    29
Speech Question Answering     MuST-C (test)               EM                  76.11    27
Automatic Speech Recognition  MuST-C En-De COMMON (test)  WER                 9.31     16
Overall Performance           MuST-C & Spoken-SQuAD       Normalized Average  1.1418   15
Speech Translation            MuST-C                      BLEU                31.54    15
Spoken Question Answering     Spoken-SQuAD                EM                  76.11    15
