Watch, read and lookup: learning to spot signs from multiple supervisors
About
The focus of this work is sign spotting: given a video of an isolated sign, our task is to identify whether and where it has been signed in a continuous, co-articulated sign language video. To achieve this, we train a model using multiple types of available supervision by: (1) watching existing sparsely labelled footage; (2) reading associated subtitles (readily available translations of the signed content), which provide additional weak supervision; (3) looking up words (for which no co-articulated labelled examples are available) in visual sign language dictionaries to enable novel sign spotting. These three tasks are integrated into a unified learning framework using the principles of Noise Contrastive Estimation and Multiple Instance Learning. We validate the effectiveness of our approach on low-shot sign spotting benchmarks. In addition, we contribute a machine-readable British Sign Language (BSL) dictionary dataset of isolated signs, BSLDict, to facilitate study of this task. The dataset, models and code are available on our project page.
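At a high level, the framework compares an embedding of the isolated query sign against embeddings of temporal windows from the continuous video: windows that plausibly contain the sign (e.g. those near a matching subtitle word) form a bag of candidate positives in the Multiple Instance Learning sense, and are contrasted against negative windows with a Noise Contrastive Estimation style objective. The sketch below is a minimal illustration of this idea and is not the released implementation; the embedding dimension, temperature, detection threshold and toy tensors are assumptions introduced for the example.

```python
# Minimal sketch of an MIL-NCE-style objective and similarity-based spotting,
# assuming precomputed, fixed-dimensional embeddings for the query sign and
# for temporal windows of the continuous video. All shapes and hyperparameters
# below are illustrative, not taken from the paper.
import torch
import torch.nn.functional as F


def mil_nce_loss(query, candidate_windows, negative_windows, temperature=0.07):
    """`query`: embedding of an isolated sign (e.g. a dictionary clip), shape (D,).
    `candidate_windows`: embeddings of windows that might contain the sign
    (weak supervision from subtitles), shape (P, D).
    `negative_windows`: embeddings of windows from unrelated footage, shape (N, D).
    Positive evidence is pooled over the candidate bag (MIL) and contrasted
    against negatives in the NCE ratio."""
    query = F.normalize(query, dim=-1)
    pos = F.normalize(candidate_windows, dim=-1)
    neg = F.normalize(negative_windows, dim=-1)

    pos_scores = torch.exp(pos @ query / temperature)  # (P,)
    neg_scores = torch.exp(neg @ query / temperature)  # (N,)

    # MIL pooling: sum evidence over the bag of candidate positives.
    return -torch.log(pos_scores.sum() / (pos_scores.sum() + neg_scores.sum()))


def spot_sign(query, video_windows, threshold=0.5):
    """Localise a sign by cosine similarity between the query embedding and
    every temporal window; returns the best window index, whether the score
    exceeds a (hypothetical) detection threshold, and all similarities."""
    sims = F.normalize(video_windows, dim=-1) @ F.normalize(query, dim=-1)
    best = int(sims.argmax())
    return best, bool(sims[best] > threshold), sims


if __name__ == "__main__":
    torch.manual_seed(0)
    D = 256
    query = torch.randn(D)          # isolated dictionary sign embedding
    candidates = torch.randn(4, D)  # windows near a matching subtitle word
    negatives = torch.randn(64, D)  # windows from unrelated videos
    print("loss:", mil_nce_loss(query, candidates, negatives).item())
    best, found, _ = spot_sign(query, torch.randn(100, D))
    print("best window:", best, "spotted:", found)
```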
Related benchmarks
| Task | Dataset | Metric | Result (%) | Rank |
|---|---|---|---|---|
| Sign Recognition | BSL-1K 37K_Rec (test) | Top-1 Accuracy (Per-Instance) | 62.1 | 7 |
| Sign Recognition | BSL-1K 37K (test) | Top-1 Accuracy (Per-Instance) | 60.9 | 3 |
| Sign Recognition | BSL-1K 2K (test) | Top-1 Accuracy (Per-Instance) | 70.8 | 3 |