Robust LLM-based Audio-Visual Speech Recognition with Sparse Modality Alignment and Visual Unit-Guided Refinement

About

Audio-Visual Speech Recognition (AVSR) integrates acoustic and visual information to enhance robustness in adverse acoustic conditions. Recent advances in Large Language Models (LLMs) have yielded competitive automatic speech recognition performance and shown effectiveness for AVSR. However, prior approaches project audio and visual features independently or apply shallow fusion, limiting cross-modal alignment and complementary exchange while increasing the LLM's computational load. To address this, we propose AVUR-LLM, an LLM-based AVSR framework built on Sparse Modality Alignment and Visual Unit-Guided Refinement. Experiments on LRS3 demonstrate state-of-the-art AVSR results. Under additive-noise conditions at 0 dB SNR, it achieves a 37% relative improvement over the baseline system.
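The abstract does not detail either component, but the general idea of aligning the two modalities sparsely before the LLM can be illustrated. Below is a minimal PyTorch sketch of one plausible reading: top-k sparse cross-attention from audio tokens to visual tokens. The class name, the top-k mechanism, and all dimensions are illustrative assumptions, not the paper's architecture.

```python
# Hypothetical sketch of sparse cross-modal alignment before an LLM.
# This is NOT the AVUR-LLM implementation; the abstract gives no
# architectural details. The top-k cross-attention design is an
# illustrative assumption only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseModalityAlignment(nn.Module):
    """Fuse audio and visual token sequences with sparse cross-attention.

    Each audio token attends only to its top-k most relevant visual
    tokens, so the fused sequence handed to the LLM keeps the length of
    the audio stream instead of the concatenated audio+visual length.
    """

    def __init__(self, dim: int, top_k: int = 4):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        self.top_k = top_k
        self.scale = dim ** -0.5

    def forward(self, audio: torch.Tensor, visual: torch.Tensor) -> torch.Tensor:
        # audio: (B, Ta, D), visual: (B, Tv, D)
        q = self.q(audio)                                # queries from audio
        k = self.k(visual)
        v = self.v(visual)
        scores = q @ k.transpose(-2, -1) * self.scale    # (B, Ta, Tv)

        # Keep only the top-k visual tokens per audio token; mask the rest
        # to -inf so they receive zero attention weight after softmax.
        k_eff = min(self.top_k, scores.size(-1))
        topk = scores.topk(k_eff, dim=-1).indices
        mask = torch.full_like(scores, float("-inf"))
        mask.scatter_(-1, topk, 0.0)
        attn = F.softmax(scores + mask, dim=-1)

        # Residual fusion: audio tokens enriched with aligned visual cues.
        return audio + attn @ v

# Toy usage: 50 audio frames, 25 video frames, feature dim 256.
align = SparseModalityAlignment(dim=256, top_k=4)
fused = align(torch.randn(2, 50, 256), torch.randn(2, 25, 256))
print(fused.shape)  # torch.Size([2, 50, 256])
```

Sparsifying the attention keeps the fused sequence at the audio length, which is one way a fusion stage can avoid inflating the token count the LLM must process, the computational concern the abstract raises about shallow fusion.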

Fei Su, Cancan Li, Juan Liu, Wei Ju, Hongbin Suo, Ming Li • 2026

Related benchmarks

Task                            | Dataset                | Result   | Rank
Audio-Visual Speech Recognition | LRS3 (test)            | WER 0.68 | 77
Audio-Visual Speech Recognition | LRS3 clean (test)      | WER 0.75 | 77
Automatic Speech Recognition    | LRS3 (test)            | --       | 58
Audio-Visual Speech Recognition | LRS3 433 h, 0 dB SNR   | WER 1.7  | 7
Audio-Visual Speech Recognition | LRS3 433 h, 5 dB SNR   | WER 1.4  | 7
Audio-Visual Speech Recognition | LRS3 433 h, 10 dB SNR  | WER 1.0  | 4
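For reference, the 37% figure in the abstract is a relative reduction, (baseline - system) / baseline. A quick sketch of the computation; the baseline WER below is a made-up placeholder, since this page does not report the baseline's 0 dB SNR result:

```python
def relative_wer_improvement(baseline_wer: float, system_wer: float) -> float:
    """Relative WER reduction: (baseline - system) / baseline."""
    return (baseline_wer - system_wer) / baseline_wer

# Placeholder baseline for illustration only; 1.7 is the 0 dB SNR WER
# from the table above.
baseline, system = 2.7, 1.7
print(f"{relative_wer_improvement(baseline, system):.0%}")  # 37%
```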
