
MAAS: Multi-modal Assignation for Active Speaker Detection

About

Active speaker detection requires a solid integration of multi-modal cues. While individual modalities can approximate a solution, accurate predictions can only be achieved by explicitly fusing the audio and visual features and modeling their temporal progression. Despite the inherently multi-modal nature of the task, current methods still focus on modeling and fusing short-term audiovisual features for individual speakers, often at the frame level. In this paper we present a novel approach to active speaker detection that directly addresses the multi-modal nature of the problem and provides a straightforward strategy in which independent visual features from potential speakers in the scene are assigned to a previously detected speech event. Our experiments show that a small graph data structure built from a single frame allows us to approximate an instantaneous audio-visual assignment problem. Moreover, the temporal extension of this initial graph achieves a new state of the art on the AVA-ActiveSpeaker dataset with an mAP of 88.8%.
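
The page does not include the authors' code, but the single-frame assignment graph described above can be illustrated with a minimal sketch: one node carries the audio (speech-event) embedding, one node per candidate face carries its visual embedding, and edges connect every face node to the audio node so that a graph model can score each audio-visual assignment. The function name `build_frame_graph`, the 128-dimensional features, and the edge layout are illustrative assumptions, not the MAAS implementation.

```python
import numpy as np

def build_frame_graph(audio_feat, face_feats):
    """Assemble a small single-frame assignment graph (illustrative sketch).

    audio_feat: (d,) embedding of the detected speech event.
    face_feats: list of (d,) embeddings, one per candidate speaker in the frame.
    Returns node features and an edge index linking every face node to the
    audio node (node 0) in both directions.
    """
    nodes = np.stack([audio_feat] + list(face_feats))  # (1 + n_faces, d)
    edges = []
    for face_idx in range(1, len(face_feats) + 1):
        edges.append((0, face_idx))   # audio -> face
        edges.append((face_idx, 0))   # face -> audio (undirected connectivity)
    return nodes, np.array(edges).T   # edge_index shaped (2, n_edges)

# Toy usage: one speech event and three candidate faces with 128-d embeddings.
rng = np.random.default_rng(0)
audio = rng.standard_normal(128)
faces = [rng.standard_normal(128) for _ in range(3)]
node_feats, edge_index = build_frame_graph(audio, faces)
print(node_feats.shape, edge_index.shape)  # (4, 128) (2, 6)
```

In the paper, such per-frame graphs are additionally extended over time; the sketch above only covers the instantaneous (single-frame) case.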

Juan León-Alcázar, Fabian Caba Heilbron, Ali Thabet, Bernard Ghanem • 2021

Related benchmarks

Task                       Dataset                                       Metric     Result   Rank
Active Speaker Detection   AVA-ActiveSpeaker (val)                       mAP        88.8     107
Active Speaker Detection   AVA-ActiveSpeaker v1.0 (val)                  mAP        88.8     27
Active Speaker Detection   AVA-ActiveSpeaker (test)                      mAP        88.3     22
Active Speaker Detection   Talkies (val)                                 mAP        79.7     14
Active Speaker Detection   AVA-ActiveSpeaker                             mAP        88.8     11
Active Speaker Detection   WASD (test)                                   mAP (OC)   90.7     9
Active Speaker Detection   Talkies (test)                                mAP        79.7     8
Active Speaker Detection   AVA-ActiveSpeaker Internal In-Domain (test)   mAP        82       7
Active Speaker Detection   WASD External/Out-of-Domain (test)            mAP        70.7     7
Active Speaker Detection   Talkies 1.0 (test)                            mAP        79.7     4

Other info

Code
