MDM-ASR: Bridging Accuracy and Efficiency in ASR with Diffusion-Based Non-Autoregressive Decoding
About
In sequence-to-sequence Transformer ASR, autoregressive (AR) models achieve strong accuracy but suffer from slow decoding, while non-autoregressive (NAR) models enable parallel decoding at the cost of degraded performance. We propose a principled NAR ASR framework based on Masked Diffusion Models to narrow this gap. A pre-trained speech encoder is coupled with a Transformer diffusion decoder that is conditioned on acoustic features and partially masked transcripts and predicts tokens in parallel. To mitigate the training-inference mismatch, we introduce Iterative Self-Correction Training, which exposes the model to its own intermediate predictions. We also design a position-biased, entropy-bounded, confidence-based sampler to further boost results. Experiments across multiple benchmarks demonstrate consistent gains over prior NAR models and competitive performance with strong AR baselines, while retaining parallel decoding efficiency.
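As a rough illustration of the sampling idea, the following sketch shows one iteration of confidence-based parallel unmasking. This is not the authors' implementation: the exact positional-bias form, the entropy bound, and the per-step unmasking budget `k` are all illustrative assumptions.

```python
# Minimal sketch of one decoding iteration for a masked diffusion decoder:
# commit the k most confident masked positions whose predictive entropy is
# below a bound, with a small left-to-right positional bonus (assumed form).
import numpy as np

def softmax(logits, axis=-1):
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def unmask_step(logits, masked, alpha=0.5, entropy_bound=2.0, k=2):
    """Pick up to k masked positions to commit this iteration.

    logits:        (T, V) decoder outputs for all T transcript positions.
    masked:        (T,) boolean, True where the transcript is still masked.
    alpha:         strength of the positional bias (illustrative).
    entropy_bound: only positions with entropy below this are eligible.
    k:             number of tokens to unmask per iteration.
    Returns (positions, tokens) committed this step.
    """
    T = logits.shape[0]
    probs = softmax(logits)                             # (T, V)
    conf = probs.max(axis=-1)                           # per-position confidence
    entropy = -(probs * np.log(probs + 1e-9)).sum(axis=-1)
    # Positional bias: earlier positions get a small confidence bonus,
    # nudging the sampler toward roughly left-to-right commitment.
    bias = alpha * (1.0 - np.arange(T) / max(T - 1, 1))
    score = conf + bias
    # Restrict to still-masked positions within the entropy bound.
    eligible = masked & (entropy < entropy_bound)
    if not eligible.any():                              # fall back to any masked slot
        eligible = masked
    score = np.where(eligible, score, -np.inf)
    order = np.argsort(-score)
    positions = [p for p in order[:k] if np.isfinite(score[p])]
    tokens = probs[positions].argmax(axis=-1)
    return np.array(positions), tokens
```

In a full decoder, this step would be repeated until no masked positions remain, with the committed tokens fed back as conditioning for the next iteration.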
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Automatic Speech Recognition | LibriSpeech (test-other) | WER (%) | 3.6 | 966 |
| Automatic Speech Recognition | LibriSpeech clean (test) | WER (%) | 1.8 | 833 |
| Automatic Speech Recognition | AMI | WER (%) | 12.2 | 28 |
| Automatic Speech Recognition | VoxPopuli | WER (%) | 6.0 | 27 |
| Automatic Speech Recognition | Earnings-22 | WER (%) | 10.7 | 25 |
| Automatic Speech Recognition | MLS FR (test) | WER (%) | 3.8 | 10 |
| Automatic Speech Recognition | MLS DE (test) | WER (%) | 3.6 | 10 |
| Automatic Speech Recognition | MLS ES (test) | WER (%) | 2.9 | 10 |