
Gloss Attention for Gloss-free Sign Language Translation

About

Most sign language translation (SLT) methods to date rely on gloss annotations to provide additional supervision; however, glosses are expensive to obtain. To address this problem, we first analyze existing models to confirm how gloss annotations make SLT easier. We find that glosses provide two kinds of information: 1) they help the model implicitly learn the locations of semantic boundaries in continuous sign language videos, and 2) they help the model understand the sign language video globally. We then propose gloss attention, which keeps the model's attention within video segments that share the same local semantics, just as glosses help existing models do. Furthermore, we transfer knowledge of sentence-to-sentence similarity from a natural language model to our gloss attention SLT network (GASLT), helping it understand sign language videos at the sentence level. Experimental results on multiple large-scale sign language datasets show that our proposed GASLT model significantly outperforms existing methods. Our code is available at https://github.com/YinAoXiong/GASLT.
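The linked repository contains the authors' full implementation. As a rough Python sketch of the local-attention idea described above (the function name, tensor shapes, and fixed window size are illustrative assumptions standing in for the learned gloss boundaries, not the paper's code), each video frame is only allowed to attend to a small neighborhood of frames assumed to share the same sign:

import torch
import torch.nn.functional as F

def local_attention(q, k, v, window: int = 7):
    """q, k, v: (batch, length, dim). Restrict attention to +/- window//2 frames."""
    b, n, d = q.shape
    scores = torch.matmul(q, k.transpose(-1, -2)) / d ** 0.5  # (b, n, n)

    # Band mask: frame i may only attend to frames j with |i - j| <= window // 2,
    # a crude stand-in for "stay inside the current semantic segment".
    idx = torch.arange(n)
    band = (idx[None, :] - idx[:, None]).abs() <= window // 2  # (n, n) bool
    scores = scores.masked_fill(~band, float("-inf"))

    attn = F.softmax(scores, dim=-1)
    return torch.matmul(attn, v)

# Usage: 16 frame features of dimension 512
x = torch.randn(1, 16, 512)
out = local_attention(x, x, x, window=5)
print(out.shape)  # torch.Size([1, 16, 512])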

Aoxiong Yin, Tianyun Zhong, Li Tang, Weike Jin, Tao Jin, Zhou Zhao • 2023

Related benchmarks

Task                        Dataset                 Metric      Result    Rank
Sign Language Translation   PHOENIX-2014T (test)    BLEU-4      15.74     159
Sign Language Translation   CSL-Daily (test)        BLEU-4      4.07      99
Sign Language Translation   CSL-Daily (dev)         ROUGE       20.51     80
Sign Language Translation   PHOENIX14T (test)       BLEU-4      15.74     50
Sign Language Recognition   PHOENIX-2014T (test)    WER         0.6174    41
Sign Language Translation   CSL-Daily v1 (test)     ROUGE       20.35     25
Sign Language Translation   CSL-Daily               --          --        9
Representation Density      PHOENIX-2014T           SDR         83.33     5
Sign Language Translation   PHOENIX-2014T           Joint-SLT   14.17     5
Sign Language Translation   SP-10                   ROUGE-L     16.98     3
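The BLEU-4 and ROUGE figures above are standard corpus-level translation metrics. As a minimal illustration, assuming the sacrebleu package and placeholder hypothesis/reference sentences (this is not tied to the GASLT evaluation scripts), corpus BLEU can be computed like this:

import sacrebleu

hypotheses = ["am samstag regnet es im norden"]              # system outputs, one per sentence
references = [["am samstag regnet es vor allem im norden"]]  # one reference stream

# sacrebleu's default BLEU uses n-grams up to order 4, i.e. BLEU-4.
bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(f"BLEU-4: {bleu.score:.2f}")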

Other info

Code: https://github.com/YinAoXiong/GASLT
