Learning When to Translate for Streaming Speech
About
How can we find the proper moments to generate partial translations given streaming speech input? Existing approaches that wait and translate for a fixed duration often break acoustic units in the speech, since the boundaries between acoustic units are unevenly spaced. In this paper, we propose MoSST, a simple yet effective method for translating streaming speech content. Given a typically long speech sequence, we develop an efficient monotonic segmentation module inside an encoder-decoder model that accumulates acoustic information incrementally and detects proper speech unit boundaries for the speech translation task. Experiments on multiple translation directions of the MuST-C dataset show that MoSST outperforms existing methods and achieves the best trade-off between translation quality (BLEU) and latency. Our code is available at https://github.com/dqqcasia/mosst.
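The core idea above, accumulating input incrementally and only emitting a partial translation once a unit boundary is detected, can be sketched as a generic streaming policy. This is a minimal illustrative sketch, not the actual MoSST implementation: the names (`detect_boundary`, `score_fn`, `translate_fn`) and the thresholded score are hypothetical stand-ins for the paper's learned monotonic segmentation module.

```python
# Hypothetical "accumulate, then decide" streaming policy.
# Not the real MoSST model; score_fn/translate_fn are illustrative plug-ins.

def detect_boundary(confidence: float, threshold: float = 0.5) -> bool:
    """Decide whether enough acoustic evidence has accumulated to translate."""
    return confidence >= threshold

def streaming_translate(frames, score_fn, translate_fn, threshold=0.5):
    """Accumulate frames; emit a partial translation at each detected boundary."""
    buffer, outputs = [], []
    for frame in frames:
        buffer.append(frame)
        if detect_boundary(score_fn(buffer), threshold):
            outputs.append(translate_fn(buffer))
            buffer = []          # start a fresh segment after each boundary
    if buffer:                   # flush any remaining tail at end of stream
        outputs.append(translate_fn(buffer))
    return outputs

# Toy demo: a "frame" is just an integer, a segment's score grows with its
# length, and "translation" joins the buffered frames into a string.
demo = streaming_translate(
    range(7),
    score_fn=lambda buf: len(buf) / 3,
    translate_fn=lambda buf: "-".join(map(str, buf)),
    threshold=1.0,
)
# demo == ["0-1-2", "3-4-5", "6"]
```

The contrast with fixed-duration wait-and-translate is that the decision here depends on the accumulated content (via `score_fn`), so segment lengths adapt to the input instead of being cut at even intervals.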
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Speech Translation | MuST-C EN-DE (test-COMMON) | BLEU 24.9 | 41 |
| Simultaneous Speech Translation | MuST-C EN-DE (tst-COMMON) | BLEU 20 | 39 |
| Speech Translation | MuST-C EN-FR COMMON (test) | BLEU 35.3 | 17 |