Redefining Machine Simultaneous Interpretation: From Incremental Translation to Human-Like Strategies
About
Simultaneous Machine Translation (SiMT) requires high-quality translations under strict real-time constraints, which traditional policies limited to READ/WRITE actions cannot fully address. We extend the action space of SiMT with four adaptive actions: Sentence_Cut, Drop, Partial_Summarization, and Pronominalization, which enable real-time restructuring, omission, and simplification while preserving semantic fidelity. We implement these actions within a large language model (LLM) framework and construct training references through action-aware prompting. To evaluate both quality and word-level monotonicity, we further develop a latency-aware TTS pipeline that maps textual outputs to speech with realistic timing. Experiments on the ACL60/60 English-Chinese, English-German, and English-Japanese benchmarks show that our framework consistently improves semantic metrics and achieves lower delay than reference translations and salami-based baselines. Notably, combining Drop and Sentence_Cut consistently improves the balance between fluency and latency. These results demonstrate that enriching the action space of LLM-based SiMT is a promising direction for bridging the gap between human and machine interpretation.
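To make the extended action space concrete, here is a minimal sketch of a SiMT control loop that dispatches over READ/WRITE plus the four adaptive actions. The `decide` and `realize` callables are hypothetical stand-ins for the LLM policy and translator; the loop structure, not the decision logic, is the point.

```python
from enum import Enum, auto

class Action(Enum):
    READ = auto()                   # consume one more source token
    WRITE = auto()                  # emit one target token
    SENTENCE_CUT = auto()           # close the current target sentence early
    DROP = auto()                   # omit a low-information source token
    PARTIAL_SUMMARIZATION = auto()  # compress the buffered tokens into a gist
    PRONOMINALIZATION = auto()      # replace a repeated noun with a pronoun

def run_policy(source, decide, realize):
    """Drive one SiMT session over a token stream.

    `decide(buf, more)` picks the next action from the read buffer and
    whether more source tokens remain; `realize(tokens)` turns buffered
    source tokens into target text. Both stand in for the LLM here.
    """
    buf, out, i = [], [], 0
    while i < len(source) or buf:
        act = decide(buf, more=i < len(source))
        if act is Action.READ and i < len(source):
            buf.append(source[i]); i += 1
        elif act is Action.DROP and buf:
            buf.pop(0)                       # omission: skip the token
        elif act is Action.PARTIAL_SUMMARIZATION and buf:
            out.append(realize(buf)); buf.clear()  # compress the whole buffer
        elif buf:
            # WRITE (and, in this toy, the remaining restructuring actions)
            out.append(realize([buf.pop(0)]))
    return out

# Toy policy: drop fillers, read until two tokens are buffered, then write.
def toy_decide(buf, more):
    if buf and buf[0] in {"uh", "um"}:
        return Action.DROP
    if more and len(buf) < 2:
        return Action.READ
    return Action.WRITE

result = run_policy("uh the model runs".split(), toy_decide, " ".join)
print(result)  # ['the', 'model', 'runs'] -- the filler "uh" was dropped
```

A real policy would let the LLM choose among all six actions from the full context; this toy version only shows how the extra actions slot into the same incremental loop as READ/WRITE.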
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Simultaneous Machine Translation | ACL60/60 En-De (eval) | BLEU | 49.97 | 20 |
| Simultaneous Machine Translation | ACL60/60 En-Ja (val) | BLEU | 55.33 | 10 |
| Simultaneous Machine Translation | ACL60/60 En-Zh (eval) | BLEU | 62.84 | 10 |
| Simultaneous Machine Translation | ACL60/60 En-Ja (eval) | BLEU | 55.33 | 10 |