
MelHuBERT: A simplified HuBERT on Mel spectrograms

About

Self-supervised models have had great success in learning speech representations that can generalize to various downstream tasks. However, most self-supervised models require a large amount of compute and multiple GPUs to train, significantly hampering the development of self-supervised learning. In an attempt to reduce the computation of training, we revisit the training of HuBERT, a highly successful self-supervised model. We improve and simplify several key components, including the loss function, the input representation, and training in multiple stages. Our model, MelHuBERT, achieves favorable performance on phone recognition, speaker identification, and automatic speech recognition compared to HuBERT, while saving 31.2% of the pre-training time, or equivalently, 33.5% of the MACs per one second of speech. The code and pre-trained models are available at https://github.com/nervjack2/MelHuBERT.

Tzu-Quan Lin, Hung-yi Lee, Hao Tang • 2022
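
To make the setup concrete, here is a minimal sketch (not the authors' implementation; see the repository linked above for the real code) of MelHuBERT-style pre-training: log-Mel spectrogram frames as input, HuBERT-style span masking, and a plain cross-entropy loss against pre-computed cluster targets. The hyperparameters (NUM_CLUSTERS, NUM_MELS, MASK_PROB, MASK_SPAN), the model sizes, and the placeholder random targets standing in for k-means cluster IDs are all illustrative assumptions.

```python
# Minimal sketch of MelHuBERT-style pre-training. All hyperparameters below
# are illustrative assumptions, not the paper's settings.
import torch
import torch.nn as nn
import torchaudio

NUM_CLUSTERS = 512   # assumed k-means vocabulary size
NUM_MELS = 40        # assumed Mel-filterbank size
MASK_PROB = 0.08     # assumed probability that a frame starts a mask span
MASK_SPAN = 10       # assumed mask span length in frames

mel_extractor = torchaudio.transforms.MelSpectrogram(
    sample_rate=16000, n_fft=400, hop_length=160, n_mels=NUM_MELS
)

def log_mel(waveform: torch.Tensor) -> torch.Tensor:
    """waveform: (batch, samples) -> log-Mel features (batch, frames, n_mels)."""
    mel = mel_extractor(waveform)                   # (batch, n_mels, frames)
    return torch.log(mel + 1e-6).transpose(1, 2)

class MelHuBERTSketch(nn.Module):
    def __init__(self, dim=768, layers=12, heads=12):
        super().__init__()
        self.input_proj = nn.Linear(NUM_MELS, dim)
        self.mask_emb = nn.Parameter(torch.randn(dim))  # learned mask vector
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=layers)
        self.head = nn.Linear(dim, NUM_CLUSTERS)

    def forward(self, feats, mask):
        # feats: (batch, frames, n_mels); mask: (batch, frames) bool
        x = self.input_proj(feats)
        x[mask] = self.mask_emb                     # replace masked frames
        return self.head(self.encoder(x))           # (batch, frames, clusters)

def span_mask(batch, frames):
    """Sample mask spans, roughly in the style of HuBERT's span masking."""
    starts = torch.rand(batch, frames) < MASK_PROB
    mask = torch.zeros(batch, frames, dtype=torch.bool)
    for offset in range(MASK_SPAN):
        mask[:, offset:] |= starts[:, : frames - offset]
    return mask

# One training step: cross-entropy on the masked frames only.
model = MelHuBERTSketch()
wave = torch.randn(2, 16000)                        # 2 x 1 second of audio
feats = log_mel(wave)
targets = torch.randint(NUM_CLUSTERS, feats.shape[:2])  # placeholder cluster IDs
mask = span_mask(*feats.shape[:2])
logits = model(feats, mask)
loss = nn.functional.cross_entropy(logits[mask], targets[mask])
loss.backward()
```

In this sketch the loss reduces to ordinary frame-level classification on masked positions, which is the kind of simplification the abstract alludes to; the actual loss, masking, and multi-stage training details are specified in the paper and repository.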

Related benchmarks

Task | Dataset | Result | Rank
Self-supervised pretraining | Libri-Light 6k | Pretraining time: 300 hr | 7
