
Disjoint Mapping Network for Cross-modal Matching of Voices and Faces

About

We propose a novel framework, the Disjoint Mapping Network (DIMNet), for cross-modal biometric matching, in particular of voices and faces. Unlike existing methods, DIMNet does not explicitly learn the joint relationship between the modalities. Instead, it learns a shared representation for the different modalities by mapping each of them individually to their common covariates. These shared representations can then be used to find correspondences between the modalities. We show empirically that DIMNet achieves better performance than other current methods, with the additional benefits of being conceptually simpler and less data-intensive.
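The core idea above can be sketched in a few lines: each modality gets its own encoder, but both encoders feed a single classifier over the common covariates, and that weight sharing is what aligns the two embedding spaces without any explicit cross-modal loss. The sketch below is illustrative only; the encoder shapes, feature dimensions, and linear maps are stand-ins for the CNNs used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

EMB_DIM = 64       # shared embedding dimension (illustrative choice)
N_COVARIATES = 2   # e.g. gender classes; DIMNet can use ID, gender, etc.

# Hypothetical per-modality encoders: in the paper these are CNNs over
# voice spectrograms and face images; plain linear maps stand in here.
W_voice = rng.normal(size=(128, EMB_DIM)) * 0.1   # voice features -> embedding
W_face = rng.normal(size=(512, EMB_DIM)) * 0.1    # face features  -> embedding

# The covariate classifier is SHARED by both modalities; this tying is
# what makes the two embedding spaces comparable after training.
W_shared = rng.normal(size=(EMB_DIM, N_COVARIATES)) * 0.1

def embed_voice(x):
    """Map voice features (batch, 128) into the shared embedding space."""
    return x @ W_voice

def embed_face(x):
    """Map face features (batch, 512) into the shared embedding space."""
    return x @ W_face

def covariate_logits(emb):
    """Same classifier regardless of which modality produced `emb`."""
    return emb @ W_shared

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# 1:2 matching at test time: pick the face whose embedding is closer to
# the probe voice embedding (covariate labels are not needed here).
voice = embed_voice(rng.normal(size=(1, 128)))[0]
face_a = embed_face(rng.normal(size=(1, 512)))[0]
face_b = embed_face(rng.normal(size=(1, 512)))[0]
choice = "A" if cosine(voice, face_a) > cosine(voice, face_b) else "B"
print(choice)
```

At training time each modality's embeddings would be optimized independently against the shared covariate classifier; at test time only the embeddings and a similarity measure are used, which is why the method needs no paired voice-face data.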

Yandong Wen, Mahmoud Al Ismail, Weiyang Liu, Bhiksha Raj, Rita Singh • 2018

Related benchmarks

Task                     | Dataset                         | Metric   | Result | Rank
Cross-modal verification | VoxCeleb1 (Unseen-Unheard)      | -        | -      | 13
1:2 Matching             | Voice-Face Unrestricted         | Accuracy | 81.3   | 9
1:2 Matching             | Voice-Face F-V Unrestricted     | Accuracy | 0.819  | 9
1:2 Matching             | Voice-Face Gender-restricted    | Accuracy | 70.6   | 9
1:2 Matching             | Voice-Face F-V, Gender-restricted | Accuracy | 69.9 | 9
Retrieval                | Voice-Face (F-V)                | mAP      | 3.8    | 8
Verification             | Voice-Face Gender-restricted    | AUC      | 0.704  | 8
Retrieval                | Voice-Face (V-F)                | mAP      | 4.3    | 8
Verification             | Voice-Face Unrestricted         | AUC      | 81     | 8
Verification             | Voice-Face F-V Unrestricted     | AUC      | 81.2   | 8

Showing 10 of 11 rows
