
MDM-ASR: Bridging Accuracy and Efficiency in ASR with Diffusion-Based Non-Autoregressive Decoding

About

In sequence-to-sequence Transformer ASR, autoregressive (AR) models achieve strong accuracy but suffer from slow decoding, while non-autoregressive (NAR) models enable parallel decoding at the cost of degraded performance. We propose a principled NAR ASR framework based on Masked Diffusion Models to reduce this gap. A pre-trained speech encoder is coupled with a Transformer diffusion decoder conditioned on acoustic features and partially masked transcripts for parallel token prediction. To mitigate the training-inference mismatch, we introduce Iterative Self-Correction Training, which exposes the model to its own intermediate predictions. We also design a Position-Biased Entropy-Bounded Confidence-based sampler to further boost results. Experiments across multiple benchmarks demonstrate consistent gains over prior NAR models and competitive performance with strong AR baselines, while retaining parallel decoding efficiency.
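The abstract does not spell out how the Position-Biased Entropy-Bounded Confidence-based sampler works, but the general family of confidence-based iterative unmasking can be sketched as follows. This is an illustrative sketch only: the function `pbeb_sample`, the linear left-to-right bias term, the entropy threshold, and the unmasking schedule are all assumptions for exposition, not the paper's actual algorithm.

```python
import numpy as np

MASK = -1  # sentinel id for a still-masked position


def entropy(p):
    """Shannon entropy (nats) of each row of a probability matrix."""
    return -np.sum(p * np.log(p + 1e-12), axis=-1)


def pbeb_sample(predict, length, steps=4, ent_bound=1.0, pos_bias=0.1):
    """Illustrative entropy-bounded, confidence-based parallel unmasking.

    `predict(tokens)` is assumed to return a (length, vocab) array of
    per-position probabilities given the partially masked sequence.
    """
    tokens = np.full(length, MASK)
    for step in range(steps):
        masked = np.where(tokens == MASK)[0]
        if masked.size == 0:
            break
        probs = predict(tokens)            # (length, vocab)
        conf = probs.max(axis=-1)          # per-position confidence
        ent = entropy(probs)
        # Assumed positional bias: slightly favor earlier positions.
        score = conf - pos_bias * (np.arange(length) / length)
        # Entropy bound: only sufficiently certain positions qualify.
        ok = masked[ent[masked] <= ent_bound]
        if ok.size == 0:                   # fall back: unmask the single best
            ok = masked[[np.argmax(score[masked])]]
        # Unmask a fraction of the remaining masks at each step.
        k = max(1, int(np.ceil(masked.size / (steps - step))))
        pick = ok[np.argsort(-score[ok])[:k]]
        tokens[pick] = probs[pick].argmax(axis=-1)
    rest = tokens == MASK                  # finalize any leftovers
    if rest.any():
        tokens[rest] = predict(tokens)[rest].argmax(axis=-1)
    return tokens


# Demo with a mock predictor that returns fixed probabilities.
rng = np.random.default_rng(0)
V, L = 10, 6
logits = rng.normal(size=(L, V))
probs_fixed = np.exp(logits) / np.exp(logits).sum(-1, keepdims=True)
out = pbeb_sample(lambda t: probs_fixed, L)
```

With a constant predictor the sampler converges to the per-position argmax; with a real diffusion decoder, each round re-conditions on the tokens unmasked so far, which is what lets confident positions anchor the rest.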

Hao Yen, Pin-Jui Ku, Ante Jukić, Sabato Marco Siniscalchi • 2026

Related benchmarks

Task                          Dataset                   Metric   Result  Rank
Automatic Speech Recognition  LibriSpeech (test-other)  WER (%)  3.6     966
Automatic Speech Recognition  LibriSpeech clean (test)  WER (%)  1.8     833
Automatic Speech Recognition  AMI                       WER (%)  12.2    28
Automatic Speech Recognition  VoxPopuli                 WER (%)  6       27
Automatic Speech Recognition  Earnings-22               WER (%)  10.7    25
Automatic Speech Recognition  MLS FR (test)             WER (%)  3.8     10
Automatic Speech Recognition  MLS DE (test)             WER (%)  3.6     10
Automatic Speech Recognition  MLS ES (test)             WER (%)  2.9     10
