DistilMOS: Layer-Wise Self-Distillation For Self-Supervised Learning Model-Based MOS Prediction

About

With the advancement of self-supervised learning (SSL), fine-tuning pretrained SSL models for mean opinion score (MOS) prediction has achieved state-of-the-art performance. However, during fine-tuning, these SSL-based MOS prediction models often suffer from catastrophic forgetting of the pretrained knowledge and tend to overfit the training set, resulting in poor generalization. In this study, we propose DistilMOS, a novel method that learns to predict not only MOS but also token IDs obtained by clustering the hidden representations of each layer in the pretrained SSL model. These layer-wise token targets serve as self-distillation signals that enable the MOS prediction model to extract rich internal knowledge from the SSL model, enhancing both prediction accuracy and generalization capability. Experimental evaluations demonstrate that our method significantly outperforms standard SSL-based MOS prediction models on both in-domain and out-of-domain evaluations, verifying the effectiveness and practicality of the proposed method.
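The core idea above can be sketched in a few lines: cluster each layer's frame-level hidden vectors to obtain token targets, then train with a joint objective combining MOS regression and layer-wise token prediction. The sketch below is a minimal, illustrative numpy version; the function names, the plain k-means clustering, the cross-entropy formulation, and the loss weight `alpha` are assumptions, not the paper's exact implementation.

```python
import numpy as np

def layerwise_token_ids(hidden_states, n_clusters=4, n_iter=10, seed=0):
    """For each SSL layer, cluster frame-level hidden vectors with a simple
    k-means and return the cluster index (token ID) of every frame.
    hidden_states: list of (n_frames, dim) arrays, one per layer.
    (Illustrative sketch; the paper's clustering setup may differ.)"""
    rng = np.random.default_rng(seed)
    token_ids = []
    for feats in hidden_states:
        # initialize centroids from randomly chosen frames
        centroids = feats[rng.choice(len(feats), n_clusters, replace=False)]
        for _ in range(n_iter):
            # assign each frame to its nearest centroid
            dists = np.linalg.norm(feats[:, None] - centroids[None], axis=-1)
            assign = dists.argmin(axis=1)
            # update centroids; keep the old one if a cluster is empty
            for k in range(n_clusters):
                if np.any(assign == k):
                    centroids[k] = feats[assign == k].mean(axis=0)
        token_ids.append(assign)
    return token_ids

def joint_loss(mos_pred, mos_true, token_logits, token_ids, alpha=0.5):
    """MOS regression (squared error) plus the mean layer-wise
    cross-entropy against the clustered token targets, weighted by alpha."""
    mse = (mos_pred - mos_true) ** 2
    ce = 0.0
    for logits, ids in zip(token_logits, token_ids):
        # log-softmax over cluster logits, then negative log-likelihood
        logp = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
        ce += -logp[np.arange(len(ids)), ids].mean()
    return mse + alpha * ce / len(token_ids)
```

In practice the clustering would run once, offline, over the frozen pretrained model's activations, and the token-prediction heads would be trained jointly with the MOS head; the sketch only shows the shape of the targets and the combined loss.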

Jianing Yang, Wataru Nakata, Yuki Saito, Hiroshi Saruwatari • 2026

Related benchmarks

Task | Dataset | Metric | Result | Rank
Mean Opinion Score Prediction | BVCC synthetic (test) | LCC | 0.939 | 32
Mean Opinion Score Prediction | SOMOS large-scale (test) | LCC | 0.404 | 16
