
SignDiff: Diffusion Model for American Sign Language Production

About

In this paper, we propose a dual-condition diffusion pre-training model, SignDiff, that generates human sign language speakers from skeleton poses. SignDiff includes a novel Frame Reinforcement Network, FR-Net, similar to dense human pose estimation work, which strengthens the correspondence between text lexical symbols and sign language dense-pose frames and reduces the occurrence of extra fingers in the diffusion model's output. In addition, we propose a new method for American Sign Language Production (ASLP) that generates ASL skeletal pose videos from text input, integrating two improved modules and a new loss function to raise the accuracy and quality of the generated skeletal poses and to strengthen the model's ability to train on large-scale data. We establish the first baseline for ASL production, reporting BLEU-4 scores of 17.19 and 12.85 on the How2Sign dev/test sets. We also evaluated our model on the previous mainstream dataset PHOENIX14T, where the experiments achieve state-of-the-art results. Moreover, our image quality exceeds all previous results by 10 percentage points in terms of SSIM.
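The baseline scores above are reported in BLEU-4. The paper's actual scoring pipeline is not given here; as a rough illustration of what the metric measures, below is a minimal pure-Python sketch of sentence-level BLEU-4 with a brevity penalty and simple smoothing (published results typically use corpus-level BLEU, e.g. sacrebleu, so values will differ):

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Count the n-grams of a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu4(candidate, reference):
    """Sentence-level BLEU-4: geometric mean of 1..4-gram precisions
    times a brevity penalty. Uses a simple smoothing for zero counts."""
    cand, ref = candidate.split(), reference.split()
    log_precisions = []
    for n in range(1, 5):
        c_counts = ngrams(cand, n)
        r_counts = ngrams(ref, n)
        # clipped overlap: each candidate n-gram credits at most its reference count
        overlap = sum(min(c, r_counts[g]) for g, c in c_counts.items())
        total = max(sum(c_counts.values()), 1)
        p = overlap / total if overlap > 0 else 1.0 / (2 * total)  # avoid log(0)
        log_precisions.append(math.log(p))
    # brevity penalty: punish candidates shorter than the reference
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * math.exp(sum(log_precisions) / 4)
```

An exact match scores 1.0; fully disjoint sentences score near zero (not exactly zero because of the smoothing term).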

Sen Fang, Chunyu Sui, Yanghao Zhou, Xuedong Zhang, Hongbin Zhong, Yapeng Tian, Chen Chen · 2023

Related benchmarks

Task                            Dataset                  Metric  Result  Rank
Sign Language Production        How2Sign (test)          BLEU-4  12.85   14
Sign Language Video Generation  ASL production dataset   SSIM    0.849   7
Sign Language Production        How2Sign (dev)           BLEU-4  17.19   6
Sign Language Generation        Prompt2Sign (test)       BLEU-1  39.46   5
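The SSIM result above measures structural similarity between generated and reference frames. As an illustration only (reported SSIM numbers normally use the windowed Gaussian variant, e.g. scikit-image's `structural_similarity`, so values will not match), here is a minimal global single-window SSIM sketch with NumPy, assuming grayscale float arrays:

```python
import numpy as np

def ssim_global(x, y, data_range=255.0):
    """Single-window SSIM over two grayscale images of equal shape.
    Computes luminance/contrast/structure terms once over the whole
    image rather than per local window."""
    c1 = (0.01 * data_range) ** 2  # stabilizers from the SSIM paper
    c2 = (0.03 * data_range) ** 2
    x = np.asarray(x, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Identical images score exactly 1.0; any mean shift or structural change pulls the score below 1.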
