
Learning Speech Representations with Variational Predictive Coding

About

Despite being the best-known objective for learning speech representations, the HuBERT objective has not been further developed and improved. We argue that it is the lack of an underlying principle that stalls its development, and, in this paper, we show that predictive coding under a variational view is the principle behind the HuBERT objective. Due to its generality, our formulation provides opportunities to improve parameterization and optimization, and we show two simple modifications that bring immediate improvements to the HuBERT objective. In addition, the predictive coding formulation has tight connections to various other objectives, such as APC, CPC, wav2vec, and BEST-RQ. Empirically, the improvement in pre-training brings significant improvements to four downstream tasks: phone classification, f0 tracking, speaker recognition, and automatic speech recognition, highlighting the importance of the predictive coding interpretation.
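For readers unfamiliar with the objective the abstract refers to: HuBERT is trained by masking spans of input frames and predicting a discrete cluster target (e.g. from k-means over acoustic features) at each masked frame via cross-entropy. The sketch below illustrates that core loss in plain NumPy; all names and shapes are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def masked_prediction_loss(logits, targets, mask):
    """HuBERT-style masked-prediction loss (illustrative sketch).

    logits:  (T, K) frame-level scores over K discrete cluster targets
    targets: (T,)   cluster index per frame (e.g. from k-means)
    mask:    (T,)   boolean, True where the input frame was masked

    Returns the average cross-entropy over masked frames only -- the
    objective the paper reinterprets as variational predictive coding.
    """
    # numerically stable log-softmax over the cluster vocabulary
    z = logits - logits.max(axis=-1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=-1, keepdims=True))
    # negative log-likelihood of the target cluster at each frame
    nll = -log_probs[np.arange(len(targets)), targets]
    # average only over the masked positions
    return nll[mask].mean()

rng = np.random.default_rng(0)
T, K = 10, 4                       # 10 frames, 4 clusters (toy sizes)
logits = rng.normal(size=(T, K))
targets = rng.integers(0, K, size=T)
mask = np.zeros(T, dtype=bool)
mask[2:6] = True                   # pretend frames 2..5 were masked
loss = masked_prediction_loss(logits, targets, mask)
print(float(loss))
```

With uniform logits the loss reduces to log K, the entropy of a uniform guess over the cluster vocabulary, which is a quick sanity check on the implementation.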

Sung-Lin Yeh, Peter Bell, Hao Tang · 2025

Related benchmarks

Task | Dataset | Metric | Result | Rank
---|---|---|---|---
Speech Recognition | WSJ (92-eval) | WER | 4.4 | 131
Speaker Verification | VoxCeleb1 (test) | Cosine EER | 14.4 | 80
Speech Recognition | WSJ nov93 (dev) | WER | 13.6 | 52
Phoneme Recognition | TIMIT (test) | PER | 11 | 31
Phoneme Recognition | TIMIT (dev) | PER | 9.5 | 20
Automatic Speech Recognition | 80-hour WSJ (dev93) | WER | 6.8 | 16
f0 tracking | Wall Street Journal (eval92) | RMSE | 20.9 | 3
Phone Classification | WSJ (dev93) | PER | 11.8 | 3
Phone Classification | WSJ (eval92) | PER | 11.3 | 3
