
SLTUNET: A Simple Unified Model for Sign Language Translation

About

Despite recent successes with neural models for sign language translation (SLT), translation quality still lags behind spoken languages because of data scarcity and the modality gap between sign video and text. To address both problems, we investigate strategies for cross-modality representation sharing for SLT. We propose SLTUNET, a simple unified neural model designed to support multiple SLT-related tasks jointly, such as sign-to-gloss, gloss-to-text and sign-to-text translation. Jointly modeling different tasks endows SLTUNET with the capability to explore cross-task relatedness that could help narrow the modality gap. In addition, this allows us to leverage knowledge from external resources, such as the abundant parallel data used for spoken-language machine translation (MT). We show in experiments that SLTUNET achieves competitive and even state-of-the-art performance on PHOENIX-2014T and CSL-Daily when augmented with MT data and equipped with a set of optimization techniques. We further use the DGS Corpus for end-to-end SLT for the first time. It covers broader domains with a significantly larger vocabulary, which is more challenging and which we consider to allow for a more realistic assessment of the current state of SLT than the former two datasets. Still, SLTUNET obtains improved results on the DGS Corpus. Code is available at https://github.com/bzhangGo/sltunet.
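The core idea of the unified model is that one shared encoder-decoder is trained on a mixed stream of examples from all tasks (sign-to-gloss, gloss-to-text, sign-to-text, plus external MT data). A common way to realize this, shown here as a minimal illustrative sketch (the task names and tag format are assumptions, not the paper's exact implementation), is to prefix each source with a task tag and shuffle all examples into a single training stream:

```python
import random

# Hypothetical toy examples per task. In SLTUNET-style training, all
# SLT-related tasks and external MT data feed one shared model.
TASKS = {
    "sign2gloss": [("<video-features>", "GLOSS SEQUENCE")],
    "gloss2text": [("GLOSS SEQUENCE", "a spoken-language sentence")],
    "sign2text":  [("<video-features>", "a spoken-language sentence")],
    "mt":         [("source sentence", "target sentence")],
}

def unified_stream(tasks, seed=0):
    """Tag each (source, target) pair with its task and shuffle,
    so a single encoder-decoder sees all tasks in one mixed stream."""
    rng = random.Random(seed)
    stream = [(f"<{name}> {src}", tgt)
              for name, pairs in tasks.items()
              for src, tgt in pairs]
    rng.shuffle(stream)
    return stream

for src, tgt in unified_stream(TASKS):
    print(src, "->", tgt)
```

The task tag lets the shared model condition its output on the desired task while all parameters benefit from every data source.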

Biao Zhang, Mathias Müller, Rico Sennrich • 2023

Related benchmarks

Task                       Dataset                Metric   Result   Rank
Sign Language Translation  PHOENIX-2014T (test)   BLEU-4   28.47    159
Sign Language Translation  PHOENIX-2014T (dev)    BLEU-4   27.87    111
Sign Language Translation  CSL-Daily (test)       BLEU-4   25.01    99
Sign Language Translation  CSL-Daily (dev)        ROUGE    54.98    80
Sign Language Translation  PHOENIX14T (test)      BLEU-4   28.47    50
Sign Language Translation  CSL-Daily v1 (test)    ROUGE    54.08    25
