
SignCLIP: Connecting Text and Sign Language by Contrastive Learning

About

We present SignCLIP, which re-purposes CLIP (Contrastive Language-Image Pretraining) to project spoken language text and sign language videos, two classes of natural languages of distinct modalities, into the same space. SignCLIP is an efficient method of learning useful visual representations for sign language processing from large-scale, multilingual video-text pairs, without directly optimizing for a specific task or sign language, for which data is often of limited size. We pretrain SignCLIP on Spreadthesign, a prominent sign language dictionary consisting of ~500,000 video clips in up to 44 sign languages, and evaluate it on various downstream datasets. SignCLIP discerns in-domain signing with notable text-to-video/video-to-text retrieval accuracy. It also performs competitively on out-of-domain downstream tasks such as isolated sign language recognition with essential few-shot prompting or fine-tuning. We analyze the latent space formed by the spoken language text and sign language poses, which provides additional linguistic insights. Our code and models are openly available.
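As a rough illustration of the CLIP-style objective that SignCLIP builds on, the sketch below computes a symmetric contrastive (InfoNCE) loss over a batch of paired text and sign video embeddings. The function name, temperature value, and tensor shapes are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def clip_contrastive_loss(text_emb, video_emb, temperature=0.07):
    """Symmetric InfoNCE loss over paired text/video embeddings.

    text_emb, video_emb: (batch, dim) tensors; row i of each is a matched pair.
    Hypothetical helper: shapes and temperature are assumptions for the sketch.
    """
    # L2-normalize so the dot product is cosine similarity
    text_emb = F.normalize(text_emb, dim=-1)
    video_emb = F.normalize(video_emb, dim=-1)

    # (batch, batch) similarity matrix; diagonal entries are the true pairs
    logits = text_emb @ video_emb.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)

    # Cross-entropy in both directions: text-to-video and video-to-text
    loss_t2v = F.cross_entropy(logits, targets)
    loss_v2t = F.cross_entropy(logits.t(), targets)
    return (loss_t2v + loss_v2t) / 2
```

Training with such a loss pulls matched text/video pairs together in the shared space while pushing apart the mismatched pairs within each batch, which is what makes retrieval in both directions possible at evaluation time.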

Zifan Jiang, Gerard Sant, Amit Moryossef, Mathias Müller, Rico Sennrich, Sarah Ebling • 2024

Related benchmarks

Task                               | Dataset            | Result     | Rank
Isolated Sign Language Recognition | ASL Citizen (test) | Rec@1: 60  | 4
Isolated Sign Language Recognition | Sem-Lex (test)     | Rec@1: 0.3 | 3
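For context on the Rec@1 numbers above, retrieval recall@k is typically computed as in the generic sketch below. The helper name and the convention that the correct candidate for query i sits at column i are assumptions for illustration, not the benchmarks' exact evaluation code.

```python
import torch

def recall_at_k(sim, k=1):
    """Fraction of queries whose true match ranks in the top-k.

    sim: (num_queries, num_candidates) similarity matrix where the correct
    candidate for query i is assumed to sit at column i (illustrative setup).
    """
    topk = sim.topk(k, dim=-1).indices                 # (num_queries, k)
    targets = torch.arange(sim.size(0)).unsqueeze(-1)  # (num_queries, 1)
    hits = (topk == targets).any(dim=-1)               # true if match in top-k
    return hits.float().mean().item()
```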
