
Multi-channel Transformers for Multi-articulatory Sign Language Translation

About

Sign languages use multiple asynchronous information channels (articulators): not just the hands, but also the face and body, which computational approaches often ignore. In this paper we tackle the multi-articulatory sign language translation task and propose a novel multi-channel transformer architecture. The proposed architecture allows both the inter- and intra-contextual relationships between different sign articulators to be modelled within the transformer network itself, while also maintaining channel-specific information. We evaluate our approach on the RWTH-PHOENIX-Weather-2014T dataset and report competitive translation performance. Importantly, we overcome the reliance on gloss annotations which underpin other state-of-the-art approaches, thereby removing the future need for expensive curated datasets.
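The core idea of modelling intra-channel context per articulator alongside inter-channel context can be sketched in a few lines. The following is a minimal, illustrative NumPy sketch, not the paper's exact architecture: the function names, the single-head attention, and the residual-style fusion are assumptions made for clarity, and real implementations would use learned projections and multiple heads.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # Scaled dot-product attention: (T_q, d), (T_k, d) -> (T_q, d)
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores, axis=-1) @ v

def multi_channel_layer(channels):
    """Illustrative layer (hypothetical): each channel first attends to
    itself (intra-channel context), then to the other channels stacked
    together (inter-channel context), keeping one stream per articulator."""
    out = []
    for i, x in enumerate(channels):
        intra = attention(x, x, x)  # self-attention within the channel
        others = np.concatenate([c for j, c in enumerate(channels) if j != i])
        inter = attention(intra, others, others)  # attend across channels
        out.append(intra + inter)  # residual-style fusion, channel kept separate
    return out

rng = np.random.default_rng(0)
# Three articulator channels (e.g. hands, face, body): T frames x d features
hand, face, body = (rng.normal(size=(5, 8)) for _ in range(3))
fused = multi_channel_layer([hand, face, body])
print([f.shape for f in fused])  # each channel retains its own (T, d) stream
```

Because each articulator keeps its own output stream, channel-specific information survives the fusion step, which is the property the abstract emphasises.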

Necati Cihan Camgoz, Oscar Koller, Simon Hadfield, Richard Bowden • 2020

Related benchmarks

Task | Dataset | Result | Rank
Sign Language Translation | PHOENIX-2014T (test) | BLEU-4: 18.51 | 159
Sign Language Translation | PHOENIX-2014T (dev) | BLEU-4: 19.51 | 111
