
Scaling Analysis of Interleaved Speech-Text Language Models

About

Existing Speech Language Model (SLM) scaling analysis paints a bleak picture: it predicts that SLMs require far more compute and data than text models, leading some to question the feasibility of training high-quality SLMs at all. However, modern SLMs are often initialised from pre-trained TextLMs using speech-text interleaving to enable knowledge transfer. This raises the question: do interleaved SLMs scale more efficiently than textless SLMs? In this paper we answer with a resounding yes! We conduct a scaling analysis of interleaved SLMs by training several dozen models and analysing the scaling trends. We find that under this setup SLMs scale more efficiently with compute. Moreover, our results indicate that the scaling dynamics differ significantly from those of textless SLMs, suggesting that one should allocate notably more of the compute budget to increasing model size rather than training tokens. We also study the role of synthetic data and of the TextLM model family in unlocking this potential. Our scaled-up model achieves semantic speech performance comparable to leading models while using less compute and data. We open-source models, samples, and data: https://pages.cs.huji.ac.il/adiyoss-lab/sims/ .
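Scaling analyses of this kind typically fit a parametric power law to the losses of many trained models and read the allocation advice off the fitted exponents. As a minimal sketch (not the paper's actual method or numbers), assume loss follows L(N) = E + A * N^-alpha for model size N at an ample token budget; with the irreducible loss E treated as known, the exponent can be recovered by a log-log linear fit:

```python
import numpy as np

# Hypothetical scaling law L(N) = E + A * N**-alpha for model size N.
# All constants below are synthetic illustrations, not values from the paper.
rng = np.random.default_rng(0)
N = np.logspace(7, 9, 20)              # model sizes, ~10M to ~1B parameters
E, A, alpha = 1.7, 420.0, 0.34         # assumed irreducible loss and law
L = E + A * N**-alpha
L *= 1 + rng.normal(0, 0.002, N.size)  # small multiplicative measurement noise

# With E known (a simplifying assumption), log(L - E) = log(A) - alpha*log(N)
# is linear in log(N), so ordinary least squares recovers the exponent.
slope, intercept = np.polyfit(np.log(N), np.log(L - E), 1)
alpha_hat, A_hat = -slope, np.exp(intercept)
print(round(alpha_hat, 2))  # close to the assumed 0.34
```

A steeper model-size exponent relative to the data exponent is what motivates shifting compute toward larger models rather than more tokens; the full analysis fits a joint law in both model size and token count rather than this one-variable toy.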

Gallil Maimon, Michael Hassid, Amit Roth, Yossi Adi • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Audio-to-Audio Story Continuation | StoryCloze tSC | A2A-tSC Score | 88.3 | 10 |
| Grammatical Integrity | sBLIMP | sBLIMP Accuracy | 59.8 | 10 |
| Structural Consistency | sWUGGY | sWUGGY Structural Consistency | 75.36 | 8 |
| Audio-to-Text Story Continuation | StoryCloze tSC | Accuracy (tSC) | 94 | 5 |
| Text-to-Text Story Continuation | StoryCloze tSC | T2T-tSC Accuracy | 98 | 5 |
