
Cross-modal Supervision for Learning Active Speaker Detection in Video

About

In this paper, we show how to use audio to supervise the learning of active speaker detection in video. Voice Activity Detection (VAD) guides the learning of the vision-based classifier in a weakly supervised manner. The classifier uses spatio-temporal features to encode upper-body motion: facial expressions and gesticulations associated with speaking. We further improve a generic model for active speaker detection by learning person-specific models. Finally, we demonstrate the online adaptation of generic models learnt on one dataset to previously unseen people in a new dataset, again using audio (VAD) for weak supervision. The use of temporal continuity overcomes the lack of clean training data. We are the first to present an active speaker detection system that learns on one audio-visual dataset and automatically adapts to speakers in a new dataset. This work can be seen as an example of how the availability of multi-modal data allows us to learn a model without the need for supervision, by transferring knowledge from one modality to another.
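The core cross-modal idea can be sketched in a few lines: VAD decisions on the audio track serve as weak labels for windows of video, and a classifier on visual features is trained against those labels. Everything below is a toy stand-in, not the paper's implementation: the paper uses spatio-temporal upper-body features and a stronger classifier, whereas this sketch uses a 1-D motion score, a thresholded "VAD" on synthetic audio energy, and plain logistic regression.

```python
import math

# Toy sketch of cross-modal weak supervision (illustrative, not the paper's code):
# audio VAD output labels each time window, and a vision-based classifier
# is trained on visual features using only those weak labels.

def vad_labels(audio_energy, threshold=0.5):
    """Weak labels from audio: 1 if voice activity is detected in a window."""
    return [1 if e > threshold else 0 for e in audio_energy]

def train_weakly_supervised(features, labels, lr=0.1, epochs=500):
    """Logistic regression via SGD, standing in for the visual classifier."""
    w = [0.0] * len(features[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(features, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))      # predicted speaking probability
            g = p - y                           # gradient of logistic loss
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    """1 = speaking, 0 = not speaking, from visual features alone."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Synthetic data: windows where the person speaks show more upper-body motion.
audio_energy = [0.9, 0.8, 0.1, 0.2, 0.95, 0.05]      # per-window audio energy
motion = [[1.0], [0.9], [0.1], [0.2], [1.1], [0.0]]  # 1-D visual motion feature
labels = vad_labels(audio_energy)                    # weak labels from audio
w, b = train_weakly_supervised(motion, labels)
```

At test time the audio is no longer needed: `predict` runs on visual features alone, which is what makes the supervision cross-modal. The same loop, run on a new person's windows with fresh VAD labels, is the flavor of the online adaptation described above.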

Punarjay Chakravarty, Tinne Tuytelaars • 2016

Related benchmarks

Task                     | Dataset  | Result                    | Rank
Active Speaker Detection | Columbia | Weighted F1 (Bell): 82.9  | 9
Active Speaker Detection | Columbia | F1 (Bell): 0.829          | 7
