
Robust wav2vec 2.0: Analyzing Domain Shift in Self-Supervised Pre-Training

About

Self-supervised learning of speech representations has been a very active research area, but most work focuses on a single domain, such as read audiobooks, for which large quantities of labeled and unlabeled data exist. In this paper, we explore more general setups where the domain of the unlabeled data used for pre-training differs from the domain of the labeled data used for fine-tuning, which in turn may differ from the test data domain. Our experiments show that using target domain data during pre-training leads to large performance improvements across a variety of setups. On a large-scale competitive setup, we show that pre-training on unlabeled in-domain data reduces the gap between models trained on in-domain and out-of-domain labeled data by 66%-73%. This has obvious practical implications, since it is much easier to obtain unlabeled than labeled target domain data. Moreover, we find that pre-training on multiple domains improves generalization performance on domains not seen during training. Code and models will be made available at https://github.com/pytorch/fairseq.
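The paper's training and evaluation code lives in fairseq (linked above). As a lightweight illustration of what inference with a fine-tuned wav2vec 2.0 ASR model looks like, the sketch below uses the model bundled with torchaudio that was pre-trained and fine-tuned on LibriSpeech (read audiobooks, i.e. the single-domain setup the paper moves beyond); the input file sample.wav is a hypothetical placeholder, and this is not the paper's own evaluation script.

```python
import torch
import torchaudio

# wav2vec 2.0 base model, pre-trained and fine-tuned on LibriSpeech 960h
# (read audiobooks). Swapping the pre-training corpus for in-domain audio
# is exactly the variable the paper studies.
bundle = torchaudio.pipelines.WAV2VEC2_ASR_BASE_960H
model = bundle.get_model().eval()

# "sample.wav" is a hypothetical input; resample to the model's rate if needed.
waveform, sample_rate = torchaudio.load("sample.wav")
if sample_rate != bundle.sample_rate:
    waveform = torchaudio.functional.resample(waveform, sample_rate, bundle.sample_rate)

with torch.inference_mode():
    emissions, _ = model(waveform)  # frame-level logits over characters

# Greedy CTC decoding: collapse repeats, drop the blank token "-",
# then map the word delimiter "|" back to spaces.
indices = torch.unique_consecutive(torch.argmax(emissions[0], dim=-1))
labels = bundle.get_labels()
transcript = "".join(labels[i] for i in indices if labels[i] != "-")
print(transcript.replace("|", " "))
```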

Wei-Ning Hsu, Anuroop Sriram, Alexei Baevski, Tatiana Likhomanenko, Qiantong Xu, Vineel Pratap, Jacob Kahn, Ann Lee, Ronan Collobert, Gabriel Synnaeve, Michael Auli • 2021

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Long-form Transcription | Earnings-22 | WER | 31 | 27 |
| Long-form Transcription | Earnings-21 | WER | 23 | 26 |
| Long-form Transcription | CORAAL | WER | 36.8 | 21 |
| Long-form Transcription | Meanwhile | WER | 15.2 | 19 |
| Long-form Transcription | Kincaid46 | WER | 22.9 | 19 |
| Long-form Transcription | Rev 16 | WER | 0.234 | 19 |
| Long-form Transcription | TED-LIUM3 | WER | 8.8 | 19 |
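All results above are word error rates (WER): the word-level edit (Levenshtein) distance between the system hypothesis and the reference transcript, divided by the number of reference words. Lower is better; scores are usually quoted as percentages (the Rev 16 entry appears to be reported as a fraction instead). A minimal sketch of the metric in plain Python, with hypothetical example strings:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between the first i reference words
    # and the first j hypothesis words.
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # i deletions
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            deletion = dp[i - 1][j] + 1
            insertion = dp[i][j - 1] + 1
            dp[i][j] = min(substitution, deletion, insertion)
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

# One inserted word against a three-word reference: WER = 1/3.
print(wer("the cat sat", "the cat sat down"))  # ~0.333
```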
