FastWhisper: Adaptive Self-knowledge Distillation for Real-time Automatic Speech Recognition

About

Knowledge distillation is one of the most effective methods for model compression. Previous studies have focused on having the student model effectively learn the predictive distribution of the teacher model. However, during training the student may also inherit the teacher's shortcomings, which can degrade its generalization capacity. To mitigate this issue, we propose adaptive self-knowledge distillation (ASKD), which dynamically reduces the student's dependence on the teacher to strengthen its self-training capacity and applies self-knowledge distillation to improve its generalization. We further distill the Whisper model into a smaller variant, called FastWhisper. In our post-training setting, FastWhisper achieves a word error rate 1.07% lower than the Whisper teacher while running roughly 5 times faster at inference.
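The page does not include code, so the following is only a minimal sketch of the idea described above: a supervised loss combined with two distillation terms, where the mixing weight shifts from the frozen teacher toward a self-teacher as training progresses. The function name `askd_loss`, the linear schedule, the temperature, and the use of an EMA copy of the student as the self-teacher are all assumptions for illustration, not details from the paper.

```python
import torch.nn.functional as F


def askd_loss(student_logits, teacher_logits, self_logits, labels,
              step, total_steps, temperature=2.0):
    """Hypothetical ASKD-style loss (a sketch, not the paper's method).

    Shapes: logits are (batch, time, vocab); labels are (batch, time)
    with -100 marking padded positions.
    """
    # Teacher weight decays linearly from 1 to 0, so the student's
    # dependence on the teacher is reduced as training progresses
    # (one plausible "adaptive" schedule).
    alpha = max(0.0, 1.0 - step / total_steps)

    t = temperature
    log_p = F.log_softmax(student_logits / t, dim=-1).flatten(0, -2)

    # KL divergence to the frozen Whisper teacher's softened distribution.
    kd_teacher = F.kl_div(
        log_p, F.softmax(teacher_logits / t, dim=-1).flatten(0, -2),
        reduction="batchmean") * t * t

    # KL divergence to a self-teacher (assumed here to be an EMA copy of
    # the student); the paper's exact self-distillation target may differ.
    kd_self = F.kl_div(
        log_p, F.softmax(self_logits / t, dim=-1).flatten(0, -2),
        reduction="batchmean") * t * t

    # Standard supervised ASR loss on the transcript tokens.
    ce = F.cross_entropy(student_logits.flatten(0, -2), labels.flatten(),
                         ignore_index=-100)

    return ce + alpha * kd_teacher + (1.0 - alpha) * kd_self
```

In a training loop, `teacher_logits` would come from a frozen Whisper model and `self_logits` from the self-teacher, with the teacher term fading out as `step` approaches `total_steps`.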

Junseok Lee, Nahoon Kim, Sangyong Lee, Chang-Jae Chun • 2026

Related benchmarks

Task                          Dataset                     WER (%)   Rank
Automatic Speech Recognition  LibriSpeech (test-other)    4.52      966
Automatic Speech Recognition  LibriSpeech clean (test)    2.34      833
Automatic Speech Recognition  AMI                         10.6      28
Automatic Speech Recognition  Earnings-22                 10.5      25
Automatic Speech Recognition  TED-LIUM                    3.89      9
