
How to Design a Three-Stage Architecture for Audio-Visual Active Speaker Detection in the Wild

About

Successful active speaker detection requires a three-stage pipeline: (i) audio-visual encoding for all speakers in the clip, (ii) inter-speaker relation modeling between a reference speaker and the background speakers within each frame, and (iii) temporal modeling for the reference speaker. Each stage of this pipeline plays an important role in the final performance of the resulting architecture. Based on a series of controlled experiments, this work presents several practical guidelines for audio-visual active speaker detection. Correspondingly, we present a new architecture called ASDNet, which achieves a new state-of-the-art on the AVA-ActiveSpeaker dataset with an mAP of 93.5%, outperforming the second best by a large margin of 4.7%. Our code and pretrained models are publicly available.
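The three stages described above can be sketched end to end. This is a minimal, hypothetical NumPy illustration of the data flow only, not the ASDNet implementation: the encoders, the similarity-weighted relation step, and the moving-average temporal model are placeholder choices made for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)


def encode_audio_visual(face_feats, audio_feats, dim=16):
    # Stage (i): fuse per-speaker visual and audio features into one embedding.
    # Placeholder encoder: concatenation followed by a random linear projection.
    W = rng.standard_normal((face_feats.shape[-1] + audio_feats.shape[-1], dim))
    fused = np.concatenate([face_feats, audio_feats], axis=-1)
    return np.tanh(fused @ W)


def model_speaker_relations(ref_emb, bg_embs):
    # Stage (ii): relate the reference speaker to background speakers in a frame.
    # Placeholder relation model: softmax-weighted context by similarity to ref.
    scores = bg_embs @ ref_emb
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    context = weights @ bg_embs
    return np.concatenate([ref_emb, context])


def temporal_model(frame_embs):
    # Stage (iii): temporal modeling over the reference speaker's track.
    # Placeholder: moving average over frames, then a sigmoid "speaking" score.
    smoothed = np.convolve(frame_embs.mean(axis=-1), np.ones(3) / 3, mode="same")
    return 1.0 / (1.0 + np.exp(-smoothed))


# Toy clip: 5 frames, 1 reference + 2 background speakers per frame.
T, n_bg, fdim, adim = 5, 2, 8, 4
faces = rng.standard_normal((T, 1 + n_bg, fdim))
audio = rng.standard_normal((T, 1 + n_bg, adim))

embs = encode_audio_visual(faces, audio)                      # (T, 3, 16)
frames = np.stack(
    [model_speaker_relations(embs[t, 0], embs[t, 1:]) for t in range(T)]
)                                                             # (T, 32)
probs = temporal_model(frames)                                # (T,) per-frame score
```

Note how the stages compose: stage (i) runs on every speaker, stage (ii) runs once per frame for the reference speaker, and stage (iii) runs once over the whole reference track, mirroring the pipeline order in the abstract.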

Okan Köpüklü, Maja Taseska, Gerhard Rigoll • 2021

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Active Speaker Detection | AVA-ActiveSpeaker (val) | mAP | 93.5 | 107 |
| Active Speaker Detection | AVA-ActiveSpeaker v1.0 (val) | mAP | 93.5 | 27 |
| Active Speaker Detection | AVA-ActiveSpeaker (test) | mAP | 91.7 | 22 |
| Active Speaker Detection | AVA-ActiveSpeaker v1.0 (test) | mAP | 91.9 | 13 |
| Active Speaker Detection | UniTalk (test) | Overall mAP | 20.6 | 10 |
| Active Speaker Detection | WASD (test) | mAP (OC) | 96.5 | 9 |
| Active Speaker Detection | AVA-ActiveSpeaker Internal In-Domain (test) | mAP | 91.1 | 7 |
| Active Speaker Detection | WASD External/Out-of-Domain (test) | mAP | 79.2 | 7 |

Other info

Code
