
Distillation-based Layer Dropping (DLD): Effective End-to-end Framework for Dynamic Speech Networks

About

Edge devices operate in constrained and varying resource settings, requiring dynamic architectures that can adapt to the limitations of the available resources. To meet such demands, the layer dropping ($\mathcal{LD}$) approach is typically used to transform static models into dynamic ones by skipping parts of the network, thereby reducing overall computational complexity. However, existing $\mathcal{LD}$ methods greatly degrade the dynamic model's performance in both low- and high-dropping cases, deteriorating the performance-computation trade-off. To this end, we propose a distillation-based layer dropping (DLD) framework that effectively combines the capabilities of knowledge distillation and $\mathcal{LD}$ in an end-to-end fashion, thereby achieving state-of-the-art performance for dynamic speech networks. Comprehensive experiments with well-known speech recognition models, including Conformer and WavLM, on three public benchmarks demonstrate the effectiveness of our framework, reducing the word error rate by $9.32\%$ and $2.25\%$ in the high- and no-dropping cases, respectively, with a $33.3\%$ reduction in training time.
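The abstract does not spell out the training loop, so the sketch below is only a rough illustration of how layer dropping and knowledge distillation can be combined in a single end-to-end step: the full (undropped) forward pass acts as the teacher and the layer-dropped pass as the student. All module names, the drop schedule, and the loss weights are assumptions for illustration, not the paper's implementation.

```python
# Minimal sketch (not the authors' implementation) of combining stochastic
# layer dropping with distillation in one training step, using PyTorch.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DroppableEncoder(nn.Module):
    """Stack of encoder layers where each layer can be skipped at run time."""

    def __init__(self, dim=256, num_layers=12, num_heads=4):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.TransformerEncoderLayer(dim, num_heads, dim * 4, batch_first=True)
            for _ in range(num_layers)
        )

    def forward(self, x, drop_prob=0.0):
        # Sample which layers to keep; with drop_prob=0 this is the full model.
        for layer in self.layers:
            if self.training and torch.rand(()) < drop_prob:
                continue  # skip this layer (identity shortcut)
            x = layer(x)
        return x


def dld_training_step(model, feats, drop_prob=0.5, alpha=0.5, tau=2.0):
    """One end-to-end step: full pass as teacher, layer-dropped pass as student,
    with the teacher's outputs distilled into the student."""
    with torch.no_grad():
        teacher_out = model(feats, drop_prob=0.0)      # full (undropped) pass
    student_out = model(feats, drop_prob=drop_prob)    # dynamic, layer-dropped pass

    # Output-level distillation between the dropped and full passes; a task
    # loss such as CTC would be added alongside this in a real ASR recipe.
    distill = F.kl_div(
        F.log_softmax(student_out / tau, dim=-1),
        F.softmax(teacher_out / tau, dim=-1),
        reduction="batchmean",
    ) * tau**2
    return alpha * distill


if __name__ == "__main__":
    model = DroppableEncoder()
    feats = torch.randn(2, 100, 256)   # (batch, frames, features)
    loss = dld_training_step(model, feats, drop_prob=0.5)
    loss.backward()
    print(f"distillation loss: {loss.item():.4f}")
```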

Abdul Hannan, Daniele Falavigna, Shah Nawaz, Mubashir Noman, Markus Schedl, Alessio Brutti • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Automatic Speech Recognition | LibriSpeech (test-clean) | WER | 4.57 | 84 |
| Automatic Speech Recognition | TED-LIUM 3 (test) | WER | 9.19 | 55 |
| Automatic Speech Recognition | LibriSpeech 1000 (test-clean) | WER | 5.82 | 19 |
