
Textually Pretrained Speech Language Models

About

Speech language models (SpeechLMs) process and generate acoustic data only, without textual supervision. In this work, we propose TWIST, a method for training SpeechLMs using a warm start from a pretrained textual language model. We show using both automatic and human evaluations that TWIST outperforms a cold-start SpeechLM across the board. We empirically analyze the effect of different model design choices such as the speech tokenizer, the pretrained textual model, and the dataset size. We find that model and dataset scale both play an important role in constructing better-performing SpeechLMs. Based on our observations, we present the largest (to the best of our knowledge) SpeechLM both in terms of number of parameters and training data. We additionally introduce two spoken versions of the StoryCloze textual benchmark to further improve model evaluation and advance future research in the field. We make speech samples, code, and models publicly available: https://pages.cs.huji.ac.il/adiyoss-lab/twist/.
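The warm start can be pictured as loading a text-pretrained causal language model and swapping its text vocabulary for the discrete speech-unit vocabulary produced by the speech tokenizer. The following is a minimal sketch of that idea, assuming a HuggingFace Transformers causal LM; the "gpt2" backbone and the 500-unit codebook size are illustrative assumptions, not the paper's exact configuration.

# Minimal sketch of a TWIST-style warm start (assumptions: "gpt2" backbone,
# 500 speech units). Not the paper's exact setup.
import torch
from transformers import AutoModelForCausalLM

NUM_SPEECH_UNITS = 500  # assumed size of the speech tokenizer's unit vocabulary

# Warm start: load a pretrained textual LM so the transformer blocks keep
# their text-pretrained weights.
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Swap the text vocabulary for the speech-unit vocabulary and re-initialize
# the (tied) embedding / LM-head weights, since speech units have no text counterpart.
model.resize_token_embeddings(NUM_SPEECH_UNITS)
model.get_input_embeddings().weight.data.normal_(mean=0.0, std=0.02)

# Training then proceeds as ordinary next-token prediction over sequences of
# discrete speech units produced by the speech tokenizer.
units = torch.randint(0, NUM_SPEECH_UNITS, (1, 128))  # placeholder unit sequence
loss = model(input_ids=units, labels=units).loss
loss.backward()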

Michael Hassid, Tal Remez, Tu Anh Nguyen, Itai Gat, Alexis Conneau, Felix Kreuk, Jade Copet, Alexandre Defossez, Gabriel Synnaeve, Emmanuel Dupoux, Roy Schwartz, Yossi Adi • 2023

Related benchmarks

Task                               | Dataset               | Result                               | Rank
Acoustic Consistency               | SALMon (continuation) | Sentiment Consistency: 51            | 25
Language Modeling                  | Pre-training (val)    | PPL: 5.34                            | 13
Speech Semantic Understanding      | sBLIMP                | sBLIMP Score: 59                     | 10
Speech Semantic Understanding      | sWUGGY                | sWUGGY Accuracy: 73.9                | 10
Semantic Understanding             | Topic-StoryCloze S→S  | Accuracy: 76.4                       | 10
Grammatical Integrity              | sBLIMP                | sBLIMP Accuracy: 59                  | 10
Speech Acoustic Understanding      | SALMon                | SALMon Score: 61.6                   | 10
Audio-to-Audio Story Continuation  | StoryCloze tSC        | A2A-tSC Score: 74.1                  | 10
Structural Consistency             | sWUGGY                | sWUGGY Structural Consistency: 73.9  | 8
Zero-shot Speech Evaluation        | sWUGGY                | sWUGGY In-Vocab Score: 84.1          | 7

Showing 10 of 18 rows.

Other info

Code
