
T2S-GPT: Dynamic Vector Quantization for Autoregressive Sign Language Production from Text

About

In this work, we propose a two-stage sign language production (SLP) paradigm that first encodes sign language sequences into discrete codes and then autoregressively generates sign language from text based on the learned codebook. However, existing vector quantization (VQ) methods use fixed-length encodings, overlooking the uneven information density in sign language, which leads to under-encoding of important regions and over-encoding of unimportant regions. To address this issue, we propose a novel dynamic vector quantization (DVA-VAE) model that can dynamically adjust the encoding length based on the information density in sign language to achieve accurate and compact encoding. A GPT-like model then learns to generate code sequences and their corresponding durations from spoken language text. Extensive experiments on the PHOENIX14T dataset demonstrate the effectiveness of our proposed method. To promote sign language research, we introduce a new large German sign language dataset, PHOENIX-News, which contains 486 hours of sign language videos, audio, and transcription texts. Experimental analysis on PHOENIX-News shows that the performance of our model can be further improved by increasing the size of the training data. Our project homepage is https://t2sgpt-demo.yinaoxiong.cn.
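The first stage above relies on vector quantization: each continuous latent vector produced by the encoder is snapped to its nearest entry in a learned codebook, yielding the discrete codes that the GPT-like model later generates. The following is a minimal NumPy sketch of that standard nearest-neighbor quantization step only; the paper's DVA-VAE additionally makes the number of codes per sequence dynamic, a mechanism not shown here, and the function and variable names are illustrative, not from the paper.

```python
import numpy as np

def quantize(z, codebook):
    """Standard VQ step: map each latent vector to its nearest codebook entry.

    z:        (T, d) array of encoder latents for T time steps
    codebook: (K, d) array of K learned code vectors
    Returns the quantized latents (T, d) and the discrete code indices (T,).
    """
    # Squared Euclidean distance from every latent to every code: (T, K)
    d2 = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    idx = d2.argmin(axis=1)          # index of the nearest code per step
    return codebook[idx], idx        # discrete codes usable by an AR model

# Example: latents that coincide with codebook rows map back to those rows.
rng = np.random.default_rng(0)
codebook = rng.standard_normal((8, 4))
z = np.vstack([codebook[3], codebook[5]])
quantized, codes = quantize(z, codebook)
```

The index sequence `codes` is what an autoregressive text-to-sign model would be trained to predict, one token at a time.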

Aoxiong Yin, Haoyuan Li, Kai Shen, Siliang Tang, Yueting Zhuang• 2024

Related benchmarks

Task                        Dataset                 Metric              Result   Rank
Sign Language Translation   PHOENIX-2014T (test)    BLEU-4              11.87    159
Sign Language Production    How2Sign                DTW-PA-JPE (Body)   11.48    10
Text-to-Pose                CSL-Daily               DTW-PA-JPE (Body)   11.94    8
Text-to-Pose                PHOENIX-2014T           DTW-PA-JPE (Body)   10.38    8
Sign Language Production    CSL-Daily               DTW-PA-JPE (Body)   11.94    7
Sign Language Production    PHOENIX-2014T           DTW-PA-JPE (Body)   10.38    7
Text-to-Pose                How2Sign                DTW-PA-JPE (Body)   11.48    7
