
ASDnB: Merging Face with Body Cues For Robust Active Speaker Detection

About

State-of-the-art Active Speaker Detection (ASD) approaches mainly use audio and facial features as input. However, the main hypothesis of this paper is that body dynamics are also highly correlated with "speaking" (and "listening") actions, and should be particularly useful in wild conditions (e.g., surveillance settings) where faces cannot be reliably accessed. We propose ASDnB, a model that integrates face with body information by merging the inputs at different steps of feature extraction. Our approach splits 3D convolution into 2D and 1D components to reduce computation cost without loss of performance, and is trained with adaptive feature-importance weighting so that body data better complements face data. Our experiments show that ASDnB achieves state-of-the-art results on the benchmark dataset (AVA-ActiveSpeaker), on the challenging data of WASD, and in cross-domain settings using Columbia. ASDnB thus performs well across multiple settings, making it a strong baseline for robust ASD models (code available at https://github.com/Tiago-Roxo/ASDnB).
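The abstract mentions splitting 3D convolution into 2D and 1D to reduce computation cost. A minimal sketch of this idea, in the spirit of (2+1)D factorizations, replaces one full 3D convolution with a spatial (1×k×k) convolution followed by a temporal (k×1×1) convolution. The layer sizes and module name below are illustrative assumptions, not the authors' actual ASDnB architecture:

```python
# Hedged sketch: factorizing a 3D convolution into spatial (2D) + temporal (1D)
# parts. Channel counts and kernel size are assumptions for illustration only.
import torch
import torch.nn as nn

class Conv2Plus1D(nn.Module):
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        # Spatial convolution: kernel 1 x k x k over (T, H, W)
        self.spatial = nn.Conv3d(in_ch, out_ch, (1, k, k),
                                 padding=(0, k // 2, k // 2))
        # Temporal convolution: kernel k x 1 x 1
        self.temporal = nn.Conv3d(out_ch, out_ch, (k, 1, 1),
                                  padding=(k // 2, 0, 0))
        self.relu = nn.ReLU()

    def forward(self, x):  # x: (B, C, T, H, W)
        return self.temporal(self.relu(self.spatial(x)))

full = nn.Conv3d(64, 64, 3, padding=1)   # full 3D convolution
split = Conv2Plus1D(64, 64)              # factorized version
x = torch.randn(1, 64, 8, 16, 16)

# Both produce the same output shape...
print(full(x).shape, split(x).shape)
# ...but the factorized version uses fewer parameters.
n_full = sum(p.numel() for p in full.parameters())
n_split = sum(p.numel() for p in split.parameters())
print(n_full, n_split)
```

With these sizes, the factorized block needs roughly half the parameters of the full 3D kernel while preserving the output shape, which is the computational saving the abstract refers to.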

Tiago Roxo, Joana C. Costa, Pedro Inácio, Hugo Proença • 2024

Related benchmarks

Task                       Dataset       Result          Rank
Active Speaker Detection   WASD (test)   mAP (OC) 98.7   9
