
PhonMatchNet: Phoneme-Guided Zero-Shot Keyword Spotting for User-Defined Keywords

About

This study presents a novel zero-shot user-defined keyword spotting model that exploits the audio-phoneme relationship of the keyword to improve performance. Unlike previous approaches that estimate a match only at the utterance level, we use both utterance- and phoneme-level information. Our proposed method comprises a two-stream speech encoder architecture, a self-attention-based pattern extractor, and a phoneme-level detection loss, yielding high performance across varied pronunciation environments. Experimental results show that our model outperforms the baseline and achieves performance competitive with full-shot keyword spotting models. It significantly improves EER and AUC across all datasets, including familiar words, proper nouns, and indistinguishable pronunciations, with average relative improvements of 67% and 80%, respectively. The implementation code is available at https://github.com/ncsoft/PhonMatchNet.
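As a minimal illustration of combining utterance- and phoneme-level detection losses, the sketch below sums a binary cross-entropy on the utterance-level match probability with the mean binary cross-entropy over per-phoneme match probabilities. This is not the authors' implementation; the `alpha` weight and the mean reduction are assumptions for illustration only.

```python
import math

def bce(p, y, eps=1e-7):
    """Binary cross-entropy for one predicted probability p against label y in {0, 1}."""
    p = min(max(p, eps), 1 - eps)  # clamp to avoid log(0)
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

def total_detection_loss(utt_prob, utt_label, phone_probs, phone_labels, alpha=1.0):
    """Utterance-level loss plus a weighted mean phoneme-level loss.

    utt_prob    - predicted probability that the utterance contains the keyword
    phone_probs - per-phoneme match probabilities from the pattern extractor
    alpha       - weight on the phoneme-level term (hypothetical; the paper's
                  exact weighting and reduction may differ)
    """
    utt_loss = bce(utt_prob, utt_label)
    phone_loss = sum(bce(p, y) for p, y in zip(phone_probs, phone_labels)) / len(phone_probs)
    return utt_loss + alpha * phone_loss
```

In practice both terms would be computed on batched model outputs (e.g. with a framework loss such as PyTorch's `BCELoss`); the point here is only that the phoneme-level term adds supervision at a finer granularity than the utterance-level decision alone.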

Yong-Hyeok Lee, Namhyun Cho • 2023

Related benchmarks

Task | Dataset | Metric | Result | Rank
Keyword Spotting | Google Speech Commands (test) | Accuracy | 96.8 | 71
Keyword Spotting | LibriPhrase Easy (LPE) | EER | 2.33 | 25
Open-vocabulary keyword spotting | LibriPhrase Easy | EER | 0.028 | 11
Zero-shot Keyword Spotting | LibriPhrase Hard, high phonetic confusion (train-other-500) | AUC | 88.52 | 9
Zero-shot Keyword Spotting | LibriPhrase Easy (LPE), low phonetic confusion (train-other-500) | AUC | 99.29 | 9
Zero-shot Keyword Spotting | Google Speech Commands V2 | AUC | 98.11 | 6
Zero-shot Keyword Spotting | Qualcomm Keyword Speech (evaluation) | AUC | 98.9 | 6
Open-vocabulary keyword spotting | LibriPhrase Hard (LPH) | AUC | 88.52 | 5
Text-enrolled Keyword Spotting | LibriPhrase Hard | EER | 24.11 | 5
Keyword Spotting | AMI | FAR | 17.879 | 5
Showing 10 of 12 rows
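Since the benchmarks above report EER (equal error rate) alongside AUC, here is a minimal pure-Python sketch of how an EER can be computed from keyword-match scores: sweep a threshold, track the false accept rate on negative pairs and the false reject rate on positive pairs, and take the operating point where the two rates are closest. Real evaluations typically use library ROC utilities (e.g. scikit-learn's `roc_curve`) rather than this illustration.

```python
def roc_points(pos_scores, neg_scores):
    """Sweep a threshold over all observed scores; return (FAR, FRR) pairs."""
    thresholds = sorted(set(pos_scores + neg_scores))
    points = []
    for t in thresholds:
        far = sum(s >= t for s in neg_scores) / len(neg_scores)  # false accepts
        frr = sum(s < t for s in pos_scores) / len(pos_scores)   # false rejects
        points.append((far, frr))
    return points

def equal_error_rate(pos_scores, neg_scores):
    """EER: the rate at the threshold where FAR and FRR are closest."""
    points = roc_points(pos_scores, neg_scores)
    return min((abs(far - frr), (far + frr) / 2) for far, frr in points)[1]
```

For example, perfectly separated scores give an EER of 0, while heavily overlapping score distributions push the EER toward 0.5 (chance level).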
