
How to Evaluate Speech Translation with Source-Aware Neural MT Metrics

About

Automatic evaluation of speech translation (ST) systems is typically performed by comparing translation hypotheses with one or more reference translations. While effective to some extent, this approach inherits a key limitation of reference-based evaluation: it ignores valuable information in the source input. In machine translation (MT), recent progress has shown that neural metrics incorporating the source text achieve stronger correlation with human judgments. Extending this idea to ST, however, is not trivial because the source is audio rather than text, and reliable transcripts or alignments between source and references are often unavailable. In this work, we conduct the first systematic study of source-aware metrics for ST, with a particular focus on real-world operating conditions where source transcripts are not available. We explore two complementary strategies for generating textual proxies of the input audio: automatic speech recognition (ASR) transcripts and back-translations of the reference translation. We also introduce a novel two-step cross-lingual re-segmentation algorithm to address the alignment mismatch between synthetic sources and reference translations. Our experiments, carried out on two ST benchmarks covering 79 language pairs and six ST systems with diverse architectures and performance levels, show that ASR transcripts constitute a more reliable synthetic source than back-translations when the word error rate (WER) is below 20%, while back-translations remain a computationally cheaper but still effective alternative. The robustness of these findings is further confirmed by experiments on a low-resource language pair (Bemba-English) and by a direct validation against human quality judgments. Furthermore, our cross-lingual re-segmentation algorithm enables robust use of source-aware MT metrics in ST evaluation, paving the way toward more accurate and principled evaluation methodologies for speech translation.
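Concretely, once a textual proxy of the audio is available, a source-aware MT metric can be applied exactly as in text-to-text evaluation. Below is a minimal sketch using the open-source unbabel-comet package, with an ASR transcript standing in for the unavailable source text; the checkpoint name and example sentences are illustrative and not necessarily those used in the paper.

```python
# Minimal sketch: scoring an ST hypothesis with a source-aware MT metric,
# feeding an ASR transcript as a synthetic source.
# Requires `pip install unbabel-comet`. "Unbabel/wmt22-comet-da" is one
# public COMET checkpoint; the paper may evaluate others.
from comet import download_model, load_from_checkpoint

model = load_from_checkpoint(download_model("Unbabel/wmt22-comet-da"))

data = [{
    "src": "hallo , wie geht es dir ?",    # ASR transcript (synthetic source)
    "mt":  "hello , how are you ?",        # ST system hypothesis
    "ref": "hello , how are you doing ?",  # reference translation
}]

scores = model.predict(data, batch_size=8, gpus=0)
print(scores.system_score)  # corpus-level metric score
```

The abstract does not spell out the two-step cross-lingual re-segmentation algorithm itself. As a rough illustration of the underlying problem, the sketch below re-segments a synthetic source so that it aligns one-to-one with the reference segments, using a monotonic dynamic program over multilingual sentence embeddings (LaBSE here) as the cross-lingual similarity. This is an assumption-laden stand-in for the general technique, not the authors' method.

```python
# Illustrative stand-in, NOT the paper's algorithm: group consecutive
# synthetic-source sentences (e.g., from ASR) into exactly as many spans as
# there are reference segments, maximizing total cross-lingual similarity.
# Assumes `pip install sentence-transformers numpy`; LaBSE is one possible
# multilingual encoder choice.
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("sentence-transformers/LaBSE")

def resegment(source_sents: list[str], reference_segs: list[str]) -> list[str]:
    assert len(source_sents) >= len(reference_segs), "need >= 1 sentence per segment"
    S = encoder.encode(source_sents, normalize_embeddings=True)
    R = encoder.encode(reference_segs, normalize_embeddings=True)
    n, m = len(source_sents), len(reference_segs)
    # prefix[i] = sum of the first i source embeddings, for O(1) span vectors
    prefix = np.vstack([np.zeros(S.shape[1]), np.cumsum(S, axis=0)])
    dp = np.full((n + 1, m + 1), -1e9)  # dp[i][j]: best score, i sents -> j segs
    back = np.zeros((n + 1, m + 1), dtype=int)
    dp[0, 0] = 0.0
    for j in range(1, m + 1):
        # segment j must leave at least one sentence for each remaining segment
        for i in range(j, n - (m - j) + 1):
            for s in range(j - 1, i):  # s = sentences consumed by segments 1..j-1
                span = prefix[i] - prefix[s]
                span /= np.linalg.norm(span) + 1e-9
                score = dp[s, j - 1] + float(span @ R[j - 1])
                if score > dp[i, j]:
                    dp[i, j], back[i, j] = score, s
    # recover span boundaries by backtracking from dp[n][m]
    cuts, i = [], n
    for j in range(m, 0, -1):
        s = back[i, j]
        cuts.append((s, i))
        i = s
    cuts.reverse()
    return [" ".join(source_sents[a:b]) for a, b in cuts]
```

For example, given three ASR sentences and two reference segments, the dynamic program picks whichever merge of adjacent sentences yields the highest summed similarity to the two references, producing two source spans that can then be paired with the references for source-aware scoring.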

Mauro Cettolo, Marco Gaido, Matteo Negri, Sara Papi, Luisa Bentivogli • 2025

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Speech Translation Evaluation | MuST-C | Pearson Correlation: 0.9971 | 94 |
| Speech Translation Metric Evaluation | Europarl-ST (test) | Average Correlation: 0.9674 | 84 |
| Speech Translation Evaluation Correlation | Europarl-ST | Pearson Correlation: 0.9983 | 70 |
| Automatic Speech Recognition | MuST-C | LASER: 0.8308 | 21 |
| Automatic Speech Recognition | Europarl-ST | LASER Score: 0.883 | 21 |
| Metric Correlation with Human Judgments | Hearing-to-Translate (five language pairs) | -- | 15 |
| Synthetic Metric Validation | IWSLT Bemba-English (Prototype) 2025 (test) | -- | 12 |
| Synthetic Metric Validation | IWSLT Bemba-English (JHU@iwslt25) 2025 (test) | -- | 12 |
| Synthetic Metric Validation | IWSLT Bemba-English (KIT) 2025 (test) | -- | 12 |
| Metric Correlation Analysis | MuST-C (test) | whspr+mdld: 94.62 | 10 |
(Showing 10 of 11 rows.)
