
Knowledge Transfer from Pre-trained Language Models to Cif-based Speech Recognizers via Hierarchical Distillation

About

Large-scale pre-trained language models (PLMs) have shown great potential in natural language processing tasks. Leveraging the capabilities of PLMs to enhance automatic speech recognition (ASR) systems has also emerged as a promising research direction. However, previous works are often limited by the inflexible structures of PLMs and by insufficient utilization of their knowledge. To alleviate these problems, we propose hierarchical knowledge distillation (HKD) on continuous integrate-and-fire (CIF) based ASR models. To transfer knowledge from PLMs to the ASR models, HKD employs cross-modal knowledge distillation with a contrastive loss at the acoustic level and knowledge distillation with a regression loss at the linguistic level. Compared with the original CIF-based model, our method achieves 15% and 9% relative error rate reduction on the AISHELL-1 and LibriSpeech datasets, respectively.
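
The two distillation objectives can be sketched as below. This is a minimal PyTorch illustration of a contrastive loss between CIF acoustic embeddings and PLM token embeddings, plus a regression (MSE) loss between ASR decoder states and PLM hidden states; the function names, shapes, temperature, and loss weights are illustrative assumptions, not the authors' exact implementation.

```python
# Minimal sketch of acoustic-level contrastive KD and linguistic-level regression KD.
# All names, shapes, and weights are assumptions for illustration only.
import torch
import torch.nn.functional as F


def acoustic_contrastive_kd(cif_embeds, plm_embeds, temperature=0.1):
    """Cross-modal contrastive distillation at the acoustic level.

    cif_embeds: (B, T, D) token-level acoustic embeddings from the CIF module (student)
    plm_embeds: (B, T, D) token embeddings from the pre-trained LM (teacher)
    Each acoustic embedding is pulled toward the PLM embedding of the same token
    and pushed away from all other tokens in the batch.
    """
    b, t, d = cif_embeds.shape
    q = F.normalize(cif_embeds.reshape(b * t, d), dim=-1)   # student queries
    k = F.normalize(plm_embeds.reshape(b * t, d), dim=-1)   # teacher keys
    logits = q @ k.t() / temperature                        # (B*T, B*T) similarity matrix
    targets = torch.arange(b * t, device=logits.device)     # positives on the diagonal
    return F.cross_entropy(logits, targets)


def linguistic_regression_kd(student_hidden, teacher_hidden):
    """Distillation at the linguistic level with a regression (MSE) loss.

    student_hidden: (B, T, D) hidden states of the ASR decoder
    teacher_hidden: (B, T, D) hidden states of the frozen PLM
    """
    return F.mse_loss(student_hidden, teacher_hidden.detach())


if __name__ == "__main__":
    # Toy example combining both losses; in practice they would be added to the ASR objective.
    B, T, D = 2, 5, 256
    cif = torch.randn(B, T, D, requires_grad=True)
    dec = torch.randn(B, T, D, requires_grad=True)
    plm = torch.randn(B, T, D)  # stands in for frozen PLM outputs
    loss = acoustic_contrastive_kd(cif, plm) + 0.5 * linguistic_regression_kd(dec, plm)
    loss.backward()
```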

Minglun Han, Feilong Chen, Jing Shi, Shuang Xu, Bo Xu • 2023

Related benchmarks

Task | Dataset | Metric | Result | Rank
Automatic Speech Recognition | AISHELL-1 (test) | CER | 4.1 | 71
Automatic Speech Recognition | AISHELL-1 (dev) | CER | 3.8 | 34

Other info

Code
