
Linear Script Representations in Speech Foundation Models Enable Zero-Shot Transliteration

About

Multilingual speech foundation models such as Whisper are trained on web-scale data, where the data for each language spans a myriad of regional varieties. However, different regional varieties often employ different scripts to write the same language, so speech recognition output is likewise subject to non-determinism in the output script. To mitigate this problem, we show that script is linearly encoded in the activation space of multilingual speech models, and that modifying activations at inference time enables direct control over the output script. We find that adding such script vectors to activations at test time can induce a script change even in unconventional language-script pairings (e.g., Italian in Cyrillic and Japanese in Latin script). We apply this approach to post-hoc control over the script of speech recognition output, where we observe competitive performance across all Whisper model sizes.
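The steering idea described above can be sketched as follows. This is a minimal illustration, not the paper's code: the toy one-layer "decoder", the difference-of-means `script_vector`, and the scaling factor `alpha` are all assumptions standing in for the real Whisper activations and the paper's actual vector-extraction procedure.

```python
# Sketch of test-time activation steering for script control.
# Assumptions: a toy linear layer stands in for a decoder layer of a
# multilingual speech model; two Gaussian clusters stand in for
# activations of transcripts written in two different scripts.
import torch
import torch.nn as nn

torch.manual_seed(0)

layer = nn.Linear(8, 8)  # stand-in for one decoder layer

def hidden_states(batch):
    """Return the layer's activations for a batch of inputs."""
    with torch.no_grad():
        return layer(batch)

# Pretend these are activations collected from transcripts in two scripts.
latin_acts = hidden_states(torch.randn(32, 8) + 1.0)
cyrillic_acts = hidden_states(torch.randn(32, 8) - 1.0)

# Difference-of-means "script vector": points from Latin toward Cyrillic.
script_vector = cyrillic_acts.mean(dim=0) - latin_acts.mean(dim=0)

# At inference, a forward hook adds the (scaled) vector to the layer output,
# shifting every activation toward the target script's region.
alpha = 1.0
x = torch.randn(4, 8)
base = hidden_states(x)
handle = layer.register_forward_hook(
    lambda module, inputs, output: output + alpha * script_vector
)
steered = hidden_states(x)
handle.remove()  # restore the unmodified model
```

After the hook is removed, the model behaves as before; the intervention is purely a test-time edit, which matches the post-hoc control setting the abstract describes.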

Ryan Soh-Eun Shim, Kwanghee Choi, Kalvin Chang, Ming-Hao Hsu, Florian Eichin, Zhizheng Wu, Alane Suhr, Michael A. Hedderich, David Harwath, David R. Mortensen, Barbara Plank • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Script confusion mitigation | FLEURS sr-latn (test) | Accuracy (Normalized Edit Similarity) | 96 | 21 |
| Script confusion mitigation | FLEURS sr-cyrl (test) | Normalized Edit Similarity | 0.94 | 21 |
| Script confusion mitigation | FLEURS zh-trad (test) | Accuracy | 91 | 21 |
| Script confusion mitigation | FLEURS zh-sim (test) | Normalized Edit Similarity Accuracy | 93 | 21 |
| Cyrillization | FLEURS Hindi (test) | Accuracy | 19 | 4 |
| Cyrillization | FLEURS Greek (test) | Accuracy | 15 | 4 |
| Cyrillization | FLEURS Japanese (test) | Accuracy | 5 | 4 |
| Cyrillization | FLEURS Korean (test) | Accuracy | 17 | 4 |
| Cyrillization | FLEURS Italian (test) | Accuracy | 43 | 4 |
| Romanization | FLEURS Hindi (test) | Accuracy | 71 | 4 |

Showing 10 of 14 rows.
