
Reducing Prompt Sensitivity in LLM-based Speech Recognition Through Learnable Projection

About

LLM-based automatic speech recognition (ASR), a well-established approach, connects speech foundation models to large language models (LLMs) through a speech-to-LLM projector, yielding promising results. A common design choice in these architectures is the use of a fixed, manually defined prompt during both training and inference. While this setup simplifies training and deployment across a range of practical scenarios, it offers no guarantee that the chosen prompt maximizes model performance, and the impact of prompt design remains underexplored. This paper presents a comprehensive analysis of commonly used prompts across diverse datasets, showing that prompt choice significantly affects ASR performance and introduces instability, with no single prompt performing best across all cases. Inspired by the speech-to-LLM projector, we propose a prompt projector module, a simple, model-agnostic extension that learns to project prompt embeddings to more effective regions of the LLM input space, without modifying the underlying LLM-based ASR model. Experiments on four datasets show that the addition of a prompt projector consistently improves performance, reduces variability, and outperforms the best manually selected prompts.
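The idea in the abstract can be sketched in a few lines: a small trainable module sits between the (frozen) prompt token embeddings and the LLM, mapping them to a learned region of the input space before they are concatenated with the projected speech features. The sketch below is a minimal NumPy illustration of that data flow, not the paper's implementation; the residual MLP design, layer sizes, and all names (`PromptProjector`, `d_model`, etc.) are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)


class PromptProjector:
    """Hypothetical prompt projector: a small residual MLP applied to
    prompt embeddings, leaving the rest of the LLM-based ASR model
    untouched (model-agnostic, as described in the abstract)."""

    def __init__(self, d_model: int, hidden: int = 1024):
        # Small random weights so the initial projection stays close
        # to the identity (a design assumption, not from the paper).
        self.w1 = rng.standard_normal((d_model, hidden)) * 0.02
        self.b1 = np.zeros(hidden)
        self.w2 = rng.standard_normal((hidden, d_model)) * 0.02
        self.b2 = np.zeros(d_model)

    def __call__(self, prompt_emb: np.ndarray) -> np.ndarray:
        # Two-layer MLP with ReLU, plus a residual connection:
        # the output lives in the same d_model space as the input.
        h = np.maximum(prompt_emb @ self.w1 + self.b1, 0.0)
        return prompt_emb + h @ self.w2 + self.b2


# Toy usage: project a fixed prompt, then concatenate it with speech
# features coming out of a speech-to-LLM projector before the LLM.
d_model = 512
projector = PromptProjector(d_model)
prompt_emb = rng.standard_normal((1, 12, d_model))  # 12 prompt tokens
speech_emb = rng.standard_normal((1, 80, d_model))  # 80 speech frames
llm_inputs = np.concatenate([projector(prompt_emb), speech_emb], axis=1)
print(llm_inputs.shape)  # (1, 92, 512)
```

In training, only the projector's weights would be updated (here, with gradients from the ASR loss in a real framework), while the prompt text, the speech encoder, and the LLM stay fixed.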

Sergio Burdisso, Esaú Villatoro-Tello, Shashi Kumar, Srikanth Madikeri, Andrés Carofilis, Pradeep Rangappa, Manjunath K E, Kadri Hacioglu, Petr Motlicek, Andreas Stolcke • 2026

Related benchmarks

| Task | Dataset | WER | Rank |
|---|---|---|---|
| Automatic Speech Recognition | AMI | 12.72 | 28 |
| Automatic Speech Recognition | LS Clean | 2.16 | 25 |
| Automatic Speech Recognition | LS-O | 4.66 | 14 |
| Automatic Speech Recognition | CC | 11.14 | 9 |
| Automatic Speech Recognition | CH | 0.2448 | 9 |
