HOMURA: Taming the Sand-Glass for Time-Constrained LLM Translation via Reinforcement Learning

About

Large Language Models (LLMs) have made remarkable strides in multilingual translation but are hindered by a systemic cross-lingual verbosity bias, rendering them unsuitable for strictly time-constrained tasks like subtitling and dubbing. Current prompt-engineering approaches struggle to resolve the conflict between semantic fidelity and rigid temporal feasibility. To bridge this gap, we first introduce Sand-Glass, a benchmark specifically designed to evaluate translation under syllable-level duration constraints. We then propose HOMURA, a reinforcement learning framework that explicitly optimizes the trade-off between semantic preservation and temporal compliance. By employing a KL-regularized objective with a novel dynamic syllable-ratio reward, HOMURA effectively "tames" the output length. Experimental results demonstrate that our method significantly outperforms strong LLM baselines, achieving precise length control that respects linguistic density hierarchies without compromising semantic adequacy.
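The page does not give the exact form of the dynamic syllable-ratio reward or the KL-regularized objective, but the idea can be sketched as follows. Everything here is an illustrative assumption: the function names (`syllable_ratio_reward`, `kl_regularized_objective`), the exponential reward shape, and the `tolerance` and `beta` values are not taken from the paper.

```python
import math

def syllable_ratio_reward(src_syllables: int, tgt_syllables: int,
                          target_ratio: float = 1.0,
                          tolerance: float = 0.1) -> float:
    """Illustrative syllable-ratio reward (NOT the paper's exact formula).

    Returns 1.0 while the target/source syllable ratio stays within
    `tolerance` of `target_ratio`, and decays exponentially as the
    translation grows too long or too short for the time slot.
    """
    ratio = tgt_syllables / src_syllables
    deviation = abs(ratio - target_ratio)
    # No penalty inside the tolerance band; smooth decay outside it.
    return math.exp(-max(0.0, deviation - tolerance) / tolerance)

def kl_regularized_objective(reward: float,
                             logp_policy: float,
                             logp_ref: float,
                             beta: float = 0.05) -> float:
    """KL-regularized RL objective in its common per-sample form:
    reward minus beta times a log-ratio estimate of KL(policy || ref),
    which keeps the fine-tuned policy close to the reference model.
    """
    return reward - beta * (logp_policy - logp_ref)
```

For example, a 10-syllable source rendered in 11 target syllables sits inside the tolerance band and keeps the full reward, while a 15-syllable rendering is penalized heavily; the KL term then trades that length reward off against drift from the reference translator.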

Ziang Cui, Mengran Yu, Tianjiao Li, Chenyu Shi, Yingxuan Shi, Lusheng Zhang, Hongwei Lin • 2026

Related benchmarks

Task                             Dataset                                  Metric     Result  Rank
Constrained Machine Translation  Sand-Glass Zh-En (test)                  COMETKiwi  0.701   18
Constrained Machine Translation  Sand-Glass Zh-Es (test)                  COMETKiwi  0.62    18
Constrained Machine Translation  Sand-Glass Zh-De (test)                  COMETKiwi  0.603   18
Translation                      Sand-Glass Zh-to-En (human evaluation)   Accuracy   4.91    14
