
Input Conditioned Layer Dropping in Speech Foundation Models

About

Curating foundation speech models for edge and IoT settings, where computational resources vary over time, requires dynamic architectures featuring adaptable reduction strategies. One emerging approach is layer dropping ($\mathcal{LD}$), which skips a fraction of the layers of a backbone network during inference to reduce the computational load, turning static models into dynamic ones. However, existing approaches are limited either in how they select layers or by significantly modifying the neural architecture. To this end, we propose input-driven $\mathcal{LD}$, which employs the network's input features and a lightweight layer-selecting network to determine the optimal combination of processing layers. Extensive experimentation on 4 public speech and audio benchmarks, using two different pre-trained foundation models, demonstrates the effectiveness of our approach, consistently outperforming random dropping and producing on-par (or better) results compared to early exit.
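The core idea can be sketched in a few lines: a lightweight selector scores each backbone layer from the input features, and only the top-scoring fraction of layers is executed, with skipped layers acting as identities. The sketch below is a minimal NumPy illustration under assumed shapes and a made-up linear selector (`W_sel`); it is not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_LAYERS = 6   # backbone depth (illustrative)
DIM = 8          # feature dimension (illustrative)

# Stand-ins for the backbone's transformer layers.
layers = [rng.standard_normal((DIM, DIM)) * 0.1 for _ in range(NUM_LAYERS)]

# Hypothetical lightweight selector: one linear map from pooled
# input features to a per-layer keep score.
W_sel = rng.standard_normal((DIM, NUM_LAYERS)) * 0.1

def select_layers(x, keep_ratio):
    """Score each layer from the input and keep the top fraction."""
    scores = x.mean(axis=0) @ W_sel            # one score per layer
    k = max(1, int(round(keep_ratio * NUM_LAYERS)))
    mask = np.zeros(NUM_LAYERS, dtype=bool)
    mask[np.argsort(scores)[-k:]] = True       # keep the k best layers
    return mask

def forward(x, keep_ratio=0.5):
    """Run only the selected layers; skipped layers are identities."""
    mask = select_layers(x, keep_ratio)
    for keep, W in zip(mask, layers):
        if keep:
            x = x + np.tanh(x @ W)             # simple residual layer
    return x, mask

x = rng.standard_normal((4, DIM))              # 4 frames of input features
out, mask = forward(x, keep_ratio=0.5)
print(int(mask.sum()), "of", NUM_LAYERS, "layers executed")
```

Because the mask is recomputed per input, the compute budget can be adjusted at inference time simply by changing `keep_ratio`, which is what makes the model dynamic.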

Abdul Hannan, Daniele Falavigna, Alessio Brutti • 2025

Related benchmarks

Task | Dataset | Metric | Result | Rank
Automatic Speech Recognition | Librispeech (test-clean) | WER | 5.47 | 84
Automatic Speech Recognition | TED-LIUM3 (test) | WER | 10.32 | 55
