
Learning Long-Term Spatial-Temporal Graphs for Active Speaker Detection

About

Active speaker detection (ASD) in videos with multiple speakers is a challenging task as it requires learning effective audiovisual features and spatial-temporal correlations over long temporal windows. In this paper, we present SPELL, a novel spatial-temporal graph learning framework that can solve complex tasks such as ASD. To this end, each person in a video frame is first encoded as a unique node for that frame. Nodes corresponding to a single person across frames are connected to encode their temporal dynamics. Nodes within a frame are also connected to encode inter-person relationships. Thus, SPELL reduces ASD to a node classification task. Importantly, SPELL is able to reason over long temporal contexts for all nodes without relying on computationally expensive fully connected graph neural networks. Through extensive experiments on the AVA-ActiveSpeaker dataset, we demonstrate that learning graph-based representations can significantly improve active speaker detection performance owing to their explicit spatial and temporal structure. SPELL outperforms all previous state-of-the-art approaches while requiring significantly lower memory and computational resources. Our code is publicly available at https://github.com/SRA2/SPELL
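The graph construction described above (one node per person per frame, temporal edges linking the same person across frames, spatial edges linking people within a frame) can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the function and variable names are our own, and SPELL connects a person across frames within a temporal window, which we simplify here to consecutive frames.

```python
def build_spell_graph(frames):
    """Sketch of SPELL-style graph construction (illustrative only).

    frames: list of frames; each frame is a list of person IDs detected
            in that frame, e.g. [["A", "B"], ["A", "B"], ["A"]].
    Returns (nodes, edges): nodes is a list of (frame_idx, person_id)
    tuples, edges an undirected edge set over node indices.
    """
    nodes, index = [], {}
    for t, people in enumerate(frames):
        for p in people:
            index[(t, p)] = len(nodes)
            nodes.append((t, p))

    edges = set()
    # Spatial edges: connect every pair of people within the same frame
    # to encode inter-person relationships.
    for t, people in enumerate(frames):
        for i in range(len(people)):
            for j in range(i + 1, len(people)):
                a, b = index[(t, people[i])], index[(t, people[j])]
                edges.add((min(a, b), max(a, b)))

    # Temporal edges: connect the same person across consecutive frames
    # to encode temporal dynamics. (The paper links frames within a
    # longer temporal window; consecutive frames keep the sketch short.)
    for t in range(len(frames) - 1):
        for p in frames[t]:
            if p in frames[t + 1]:
                a, b = index[(t, p)], index[(t + 1, p)]
                edges.add((min(a, b), max(a, b)))
    return nodes, edges
```

With the graph built, ASD becomes binary node classification: each (frame, person) node is labeled speaking or not speaking, and a GNN aggregates audiovisual features along the spatial and temporal edges.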

Kyle Min, Sourya Roy, Subarna Tripathi, Tanaya Guha, Somdeb Majumdar • 2022

Related benchmarks

Task | Dataset | Metric | Result | Rank
Active Speaker Detection | AVA-ActiveSpeaker (val) | mAP | 94.9 | 107
Active Speaker Detection | AVA-ActiveSpeaker v1.0 (val) | mAP | 94.2 | 27
Active Speaker Detection | AVA-ActiveSpeaker | mAP | 94.2 | 11
Active Speaker Detection | Ego4D Audio-Visual benchmark | mAP | 60.7 | 9

Other info

Code: https://github.com/SRA2/SPELL