
On the Fallacy of Global Token Perplexity in Spoken Language Model Evaluation

About

Generative spoken language models pretrained on large-scale raw audio can continue a speech prompt with appropriate content while preserving attributes such as speaker identity and emotion, serving as foundation models for spoken dialogue. In prior literature, these models are often evaluated using "global token perplexity", which directly applies the text perplexity formulation to speech tokens. However, this practice overlooks fundamental differences between the speech and text modalities, possibly leading to an underestimation of how well models capture speech-specific characteristics. In this work, we propose a variety of likelihood- and generation-based evaluation methods that serve in place of naive global token perplexity. We demonstrate that the proposed evaluations more faithfully reflect perceived generation quality, as evidenced by stronger correlations with human-rated mean opinion scores (MOS). When assessed under the new metrics, the relative performance landscape of spoken language models is reshaped, revealing a significantly reduced gap between the best-performing model and the human topline. Together, these results suggest that appropriate evaluation is critical for accurately assessing progress in spoken language modeling.
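For concreteness, the "global token perplexity" the abstract critiques is the standard text-LM formulation, the exponentiated mean negative log-likelihood over every token, applied unchanged to a speech-token sequence. A minimal sketch (the per-token log-probabilities here are hypothetical values; a real evaluation would obtain them from a spoken language model):

```python
import math

def global_token_perplexity(token_logprobs):
    """Naive global token perplexity: exponentiate the mean negative
    log-likelihood over all tokens, exactly as in the text-LM
    formulation, with no distinction among speech-token types."""
    nll = -sum(token_logprobs) / len(token_logprobs)
    return math.exp(nll)

# Toy speech-token sequence with hypothetical per-token log-probs.
logprobs = [-1.2, -0.8, -2.1, -0.5, -1.6]
print(global_token_perplexity(logprobs))  # exp(1.24) ≈ 3.456
```

Because every token is weighted equally, this single scalar averages over acoustically redundant tokens and semantically informative ones alike, which is the modality mismatch motivating the alternative likelihood- and generation-based metrics proposed here.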

Jeff Chan-Jan Sju, Liang-Hsuan Tseng, Yi-Cheng Lin, Yen-Chun Kuo, Ju-Chieh Chou, Kai-Wei Chang, Hung-yi Lee, Carlos Busso • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Acoustic Consistency | SALMon | Sentiment Consistency | 95.5 | 61 |
| Acoustic Consistency | SALMon (continuation) | Sentiment Consistency | 98 | 25 |
| Semantic-Acoustic Alignment | SALMon | Sentiment Score | 59 | 25 |
| Speech Generation | SALMon (human evaluation) | Sentiment Score | 3.86 | 8 |
