
Video-guided Machine Translation with Global Video Context

About

Video-guided Multimodal Translation (VMT) has advanced significantly in recent years. However, most existing methods rely on locally aligned video segments paired one-to-one with subtitles, limiting their ability to capture global narrative context across multiple segments in long videos. To overcome this limitation, we propose a globally video-guided multimodal translation framework that leverages a pretrained semantic encoder and vector database-based subtitle retrieval to construct a context set of video segments closely related to the target subtitle semantics. An attention mechanism is employed to focus on highly relevant visual content, while preserving the remaining video features to retain broader contextual information. Furthermore, we design a region-aware cross-modal attention mechanism to enhance semantic alignment during translation. Experiments on a large-scale documentary translation dataset demonstrate that our method significantly outperforms baseline models, highlighting its effectiveness in long-video scenarios.
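The retrieval step described above can be sketched in a few lines: embed the target subtitle, then pull the top-k video segments whose embeddings are most similar, forming the global context set. This is a minimal illustration only; the function and variable names are hypothetical, and the paper's actual semantic encoder and vector database are not specified in this summary.

```python
# Illustrative sketch of embedding-based segment retrieval (names are
# hypothetical; the paper's encoder and vector database are not given here).
import numpy as np

def build_context_set(query_emb, segment_embs, k=3):
    """Return indices of the k segments most similar to the query subtitle."""
    q = query_emb / np.linalg.norm(query_emb)
    s = segment_embs / np.linalg.norm(segment_embs, axis=1, keepdims=True)
    sims = s @ q                   # cosine similarity to each segment
    return np.argsort(-sims)[:k]   # indices of the top-k most similar segments

# Toy example: 5 segments with 4-d embeddings; the query is a slightly
# perturbed copy of segment 2, so segment 2 should rank first.
rng = np.random.default_rng(0)
segs = rng.normal(size=(5, 4))
query = segs[2] + 0.01 * rng.normal(size=4)
context = build_context_set(query, segs, k=2)
```

In a real system the brute-force similarity scan would be replaced by an approximate-nearest-neighbor index, but the retrieval semantics are the same.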

Jian Chen, JinZe Lv, Zi Long, XiangHua Fu • 2026

Related benchmarks

Task                                  Dataset            Result        Rank
Multimodal Machine Translation        TopicVD (test)     BLEU 30.47    5
Video-guided Multimodal Translation   BigVideo (subset)  BLEU 46.78    3
