
Two-Stage Acoustic Adaptation with Gated Cross-Attention Adapters for LLM-Based Multi-Talker Speech Recognition

About

Large Language Models (LLMs) are strong decoders for Serialized Output Training (SOT) in two-talker Automatic Speech Recognition (ASR), yet their performance degrades substantially in challenging conditions such as three-talker mixtures. A key limitation is that current systems inject acoustic evidence only through a projected prefix, which can be lossy and imperfectly aligned with the LLM input space, providing insufficient fine-grained grounding during decoding. This paper improves LLM-based multi-talker ASR by explicitly injecting talker-aware acoustic evidence into the decoder. We first revisit Connectionist Temporal Classification (CTC)-derived prefix prompting and compare three variants with increasing acoustic content; the CTC information is obtained with the serialized CTC proposed in our previous work. While acoustic-enriched prompts outperform the SOT-only baseline, prefix-only conditioning remains inadequate for three-talker mixtures. We therefore propose a lightweight gated residual cross-attention adapter and design a two-stage acoustic adaptation framework based on Low-Rank Adaptation (LoRA). In Stage 1, we insert gated cross-attention adapters after the self-attention sub-layer to stably inject acoustic embeddings as external memory. In Stage 2, we refine both the cross-attention adapters and the pretrained LLM's self-attention projections with parameter-efficient LoRA, improving robustness for large backbones under limited data; the learned updates are merged into the base weights for inference. Experiments on Libri2Mix/Libri3Mix under clean and noisy conditions show consistent gains, with particularly large improvements in three-talker settings.
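The two key mechanisms in the abstract can be sketched in a few lines. Below is a minimal NumPy illustration, not the authors' implementation: a single-head gated residual cross-attention adapter whose gate is initialized to zero (so the adapter starts as an identity map and injects acoustic memory gradually, a Flamingo-style design assumed here for stability), plus the standard LoRA weight merge W' = W + (alpha/r) * B A used at inference. All dimensions and initializations are illustrative assumptions.

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    """Normalize the last dimension (no learned scale/shift, for brevity)."""
    return (x - x.mean(-1, keepdims=True)) / np.sqrt(x.var(-1, keepdims=True) + eps)

def softmax(x):
    e = np.exp(x - x.max(-1, keepdims=True))
    return e / e.sum(-1, keepdims=True)

class GatedCrossAttentionAdapter:
    """Single-head sketch: out = x + tanh(gate) * CrossAttn(LN(x), acoustic_mem).

    gate starts at 0, so tanh(gate) = 0 and the adapter is initially the
    identity; training can then open the gate to let acoustic evidence in.
    """
    def __init__(self, d_model, d_acoustic, seed=0):
        rng = np.random.default_rng(seed)
        s = 1.0 / np.sqrt(d_model)
        self.Wq = rng.normal(0, s, (d_model, d_model))
        self.Wk = rng.normal(0, s, (d_acoustic, d_model))
        self.Wv = rng.normal(0, s, (d_acoustic, d_model))
        self.Wo = rng.normal(0, s, (d_model, d_model))
        self.gate = 0.0  # scalar gate, zero-initialized

    def __call__(self, x, acoustic_mem):
        # x: (T, d_model) decoder states; acoustic_mem: (S, d_acoustic) frames
        q = layer_norm(x) @ self.Wq
        k = acoustic_mem @ self.Wk
        v = acoustic_mem @ self.Wv
        att = softmax(q @ k.T / np.sqrt(q.shape[-1]))   # (T, S) attention map
        return x + np.tanh(self.gate) * (att @ v @ self.Wo)

def merge_lora(W, A, B, alpha, r):
    """Fold a rank-r LoRA update into the base weight for inference."""
    return W + (alpha / r) * (B @ A)

# Usage: 4 decoder states (d=8) attending over 6 acoustic frames (d=8).
x = np.ones((4, 8))
mem = np.zeros((6, 8))
adapter = GatedCrossAttentionAdapter(8, 8)
out = adapter(x, mem)          # gate=0 → identical to x

W_merged = merge_lora(np.eye(2), np.ones((1, 2)), np.ones((2, 1)), alpha=1.0, r=1)
```

With the gate at zero the adapter leaves the LLM's behavior untouched, which is why it can be inserted into a pretrained backbone without destabilizing it; the LoRA merge shows why the adapted model incurs no extra inference cost over the base weights.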

Hao Shi, Yuan Gao, Xugang Lu, Tatsuya Kawahara• 2026

Related benchmarks

Task                                        Dataset                  WER    Rank
Multi-talker Automatic Speech Recognition   Libri2Mix Clean (dev)    3.0    23
Multi-talker Automatic Speech Recognition   Libri2Mix Noisy (eval)   7.5    22
Multi-talker Automatic Speech Recognition   Libri3Mix Clean (eval)   8.7    20
Multi-talker Automatic Speech Recognition   Libri3Mix Noisy (eval)   17.1   19
Multi-talker Automatic Speech Recognition   Libri2Mix Noisy (dev)    8.4    17
Multi-talker Automatic Speech Recognition   Libri3Mix Noisy (dev)    18.8   17
Multi-talker Automatic Speech Recognition   Libri3Mix Clean (dev)    8.6    17
Multi-talker Automatic Speech Recognition   Libri2Mix Clean (test)   3.1    16
