# MAAS: Multi-modal Assignation for Active Speaker Detection
## About
Active speaker detection requires a solid integration of multi-modal cues. While individual modalities can approximate a solution, accurate predictions can only be achieved by explicitly fusing the audio and visual features and modeling their temporal progression. Despite its inherently multi-modal nature, current methods still focus on modeling and fusing short-term audiovisual features for individual speakers, often at the frame level. In this paper we present a novel approach to active speaker detection that directly addresses the multi-modal nature of the problem and provides a straightforward strategy in which independent visual features from potential speakers in the scene are assigned to a previously detected speech event. Our experiments show that a small graph data structure built from a single frame allows us to approximate an instantaneous audio-visual assignment problem. Moreover, the temporal extension of this initial graph achieves a new state of the art on the AVA-ActiveSpeaker dataset with a mAP of 88.8%.
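To make the single-frame assignment idea concrete, here is a minimal sketch of such a graph: one audio node for the detected speech event, one visual node per candidate speaker, edges linking the audio node to every face (and faces to each other), and a toy assignment step. The feature dimensions and the cosine-similarity scoring are illustrative placeholders, not the learned model described in the paper.

```python
import numpy as np

def build_frame_graph(audio_feat, face_feats):
    """Illustrative single-frame audio-visual graph: node 0 holds the audio
    (speech-event) feature, nodes 1..N hold candidate speakers' visual
    features. Edges connect the audio node to every face, and faces to
    each other."""
    nodes = [audio_feat] + list(face_feats)
    n = len(nodes)
    edges = [(0, i) for i in range(1, n)]                             # audio-visual edges
    edges += [(i, j) for i in range(1, n) for j in range(i + 1, n)]   # visual-visual edges
    return np.stack(nodes), edges

def assign_speech(audio_feat, face_feats):
    """Toy assignment: attribute the speech event to the face whose feature
    best matches the audio by cosine similarity (a stand-in for the paper's
    learned scoring over the graph)."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))
    scores = [cos(audio_feat, f) for f in face_feats]
    return int(np.argmax(scores)), scores

# Example: two candidate faces in one frame; the second aligns with the audio.
audio = np.array([1.0, 0.0])
faces = [np.array([0.0, 1.0]), np.array([1.0, 0.1])]
nodes, edges = build_frame_graph(audio, faces)
speaker_idx, _ = assign_speech(audio, faces)  # selects face index 1
```

The temporal extension discussed in the abstract would replicate this per-frame graph across neighboring frames and add temporal edges between corresponding nodes.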
## Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Active Speaker Detection | AVA-ActiveSpeaker (val) | mAP | 88.8 | 107 |
| Active Speaker Detection | AVA-ActiveSpeaker v1.0 (val) | mAP | 88.8 | 27 |
| Active Speaker Detection | AVA-ActiveSpeaker (test) | mAP | 88.3 | 22 |
| Active Speaker Detection | Talkies (val) | mAP | 79.7 | 14 |
| Active Speaker Detection | AVA-ActiveSpeaker | mAP | 88.8 | 11 |
| Active Speaker Detection | WASD (test) | mAP (OC) | 90.7 | 9 |
| Active Speaker Detection | Talkies (test) | mAP | 79.7 | 8 |
| Active Speaker Detection | AVA-ActiveSpeaker Internal In-Domain (test) | mAP | 82 | 7 |
| Active Speaker Detection | WASD External/Out-of-Domain (test) | mAP | 70.7 | 7 |
| Active Speaker Detection | Talkies 1.0 (test) | mAP | 79.7 | 4 |