
Calibration-Reasoning Framework for Descriptive Speech Quality Assessment

About

Explainable speech quality assessment requires moving beyond Mean Opinion Scores (MOS) to analyze the underlying perceptual dimensions. To address this, we introduce a novel post-training method that tailors a foundational Audio Large Language Model for multidimensional reasoning as well as the detection and classification of audio artifacts. First, a calibration stage aligns the model to predict predefined perceptual dimensions. Second, a reinforcement learning stage leverages Group Relative Policy Optimization (GRPO) with dimension-specific rewards to substantially improve the accuracy of quality descriptions and the temporal localization of quality issues. With this approach, we achieve state-of-the-art results: a mean PCC of 0.71 on the multidimensional QualiSpeech benchmark and a 13% improvement in MOS prediction driven by RL-based reasoning. Furthermore, our fine-grained GRPO rewards substantially advance the model's ability to pinpoint and classify audio artifacts in time.
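The core of GRPO is that each sampled response is scored not in isolation but relative to a group of responses drawn from the same prompt. A minimal sketch of this group-relative advantage, paired with a hypothetical dimension-specific reward (the reward shape, dimension names, and 1-to-5 score scale are illustrative assumptions, not the authors' implementation):

```python
from statistics import mean, pstdev

def group_relative_advantages(rewards):
    """GRPO-style advantage: normalize each response's reward against the
    mean and standard deviation of its sampled group.

    rewards: scalar rewards for G responses sampled from the same prompt.
    Returns one advantage per response (shared by all its tokens).
    """
    mu = mean(rewards)
    sigma = pstdev(rewards)
    if sigma == 0:  # all responses scored identically -> no learning signal
        return [0.0 for _ in rewards]
    return [(r - mu) / sigma for r in rewards]

def dimension_reward(pred, target, weights):
    """Hypothetical dimension-specific reward: weighted closeness of the
    predicted perceptual-dimension scores (e.g. noisiness, naturalness)
    to the annotated scores, assuming a 1-5 rating scale per dimension."""
    total = 0.0
    for dim, w in weights.items():
        total += w * (1.0 - abs(pred[dim] - target[dim]) / 4.0)
    return total / sum(weights.values())
```

Responses whose dimension scores land closer to the annotations receive above-average rewards and hence positive advantages, which is what steers the policy toward more accurate descriptions.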

Elizaveta Kostenok, Mathieu Salzmann, Milos Cernak • 2026

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Speech Quality Assessment | QualiSpeech (test) | Naturalness (PCC): 0.73 | 8 |
| Brief Audio Artifact Characterization | QualiSpeech | Noise F1: 77 | 6 |
| Long-form Audio Quality Description | QualiSpeech | ROUGE-L: 51 | 6 |
