Adapting Speech Foundation Models for Unified Multimodal Speech Recognition with Large Language Models
About
While speech foundation models (SFMs) have demonstrated remarkable performance in audio-only tasks, their adaptation to multimodal scenarios remains underexplored. This work presents UASR-LLM, a novel framework that adapts frozen SFMs to unified visual speech recognition (VSR), automatic speech recognition (ASR), and audio-visual speech recognition (AVSR) by leveraging large language models (LLMs) as text decoders. Visual representations are injected into multiple SFM layers via visual injection modules, enabling multimodal fusion and unified representation learning. The augmented SFMs are connected to decoder-only LLMs through a feed-forward adaptor, where concatenated representations and instruction prompts guide transcription. We propose a two-stage training strategy consisting of visual injection pretraining followed by speech recognition finetuning. The pretraining stage aligns audio, visual, and audio-visual representations within the frozen SFM backbone, while the finetuning stage integrates LLMs for unified optimization across speech recognition tasks. Experimental results demonstrate superior performance over state-of-the-art baselines across VSR, ASR, and AVSR under both clean and noisy conditions. Ablation studies further confirm generalization across various SFMs and LLMs, validating the effectiveness of the proposed training strategy.
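The data flow described above can be sketched numerically: visual features are projected into the frozen SFM's hidden space and injected into an intermediate layer's output, the fused representation is mapped into the LLM embedding space by a feed-forward adaptor, and the result is concatenated with instruction-prompt embeddings. This is a minimal illustration with random weights; all dimensions, the additive injection, and the two-layer adaptor shape are assumptions, not the paper's exact configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (not from the paper): T audio frames, SFM hidden
# size D, visual feature size Dv, LLM embedding size Dl.
T, D, Dv, Dl = 50, 768, 512, 1024

def visual_injection(audio_hidden, visual_feats, w_proj):
    """Sketch of one visual injection module: project time-aligned
    visual features into the SFM hidden space and add them to a
    frozen layer's output (additive fusion is an assumption)."""
    return audio_hidden + visual_feats @ w_proj          # (T, D)

def feed_forward_adaptor(fused, w1, b1, w2, b2):
    """Two-layer feed-forward adaptor mapping fused SFM features
    into the LLM embedding space (hidden width is an assumption)."""
    h = np.maximum(fused @ w1 + b1, 0.0)                 # ReLU
    return h @ w2 + b2                                   # (T, Dl)

audio_hidden = rng.standard_normal((T, D))               # frozen SFM layer output
visual_feats = rng.standard_normal((T, Dv))              # lip-region features
w_proj = rng.standard_normal((Dv, D)) * 0.02

fused = visual_injection(audio_hidden, visual_feats, w_proj)

w1 = rng.standard_normal((D, 2048)) * 0.02
b1 = np.zeros(2048)
w2 = rng.standard_normal((2048, Dl)) * 0.02
b2 = np.zeros(Dl)
speech_tokens = feed_forward_adaptor(fused, w1, b1, w2, b2)

# Concatenate instruction-prompt embeddings (e.g. "Transcribe the
# speech:") with the adapted speech tokens before the LLM decoder.
prompt_embeds = rng.standard_normal((8, Dl))
llm_input = np.concatenate([prompt_embeds, speech_tokens], axis=0)
print(llm_input.shape)  # (58, 1024)
```

Dropping the `visual_feats` term recovers the audio-only (ASR) path, and zeroing `audio_hidden` approximates the visual-only (VSR) path, which is what makes the three tasks share one backbone.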
Related benchmarks
| Task | Dataset | Result (WER) | Rank |
|---|---|---|---|
| Visual Speech Recognition | LRS3 (test) | 20.9 | 209 |
| Audio-Visual Speech Recognition | LRS3 (test) | 0.69 | 77 |
| Automatic Speech Recognition | LRS3 (test) | -- | 58 |
| Visual Speech Recognition | LRS3 30h labeled low-resource (test) | 25.3 | 28 |
| Automatic Speech Recognition | LRS3 30h labeled low-resource (test) | 2.2 | 26 |
| Audio-Visual Speech Recognition | LRS3 30h labeled low-resource (test) | 1.8 | 22 |