
Multiclass Language Identification using Deep Learning on Spectral Images of Audio Signals

About

The first step in any voice recognition software is to determine what language a speaker is using, and ideally this process would be automated. The technique described in this paper, language identification for audio spectrograms (LIFAS), uses spectrograms generated from audio signals as inputs to a convolutional neural network (CNN) for language identification. LIFAS requires minimal pre-processing of the audio signals, because spectrograms are generated for each batch as it is fed to the network during training. LIFAS takes deep learning tools that have proven successful on image processing tasks and applies them to audio signal classification. LIFAS performs binary language classification with an accuracy of 97%, and multi-class classification with six languages at an accuracy of 89%, on 3.75-second audio clips.
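The paper does not publish its preprocessing code, but the core idea (turning a fixed-length audio clip into a 2-D spectrogram image for a CNN) can be sketched with `scipy`. All parameters below (sample rate, FFT window size, overlap) are illustrative assumptions, not values from the paper:

```python
import numpy as np
from scipy.signal import spectrogram

# Assumed parameters for illustration; the paper does not specify them.
SAMPLE_RATE = 8000    # samples per second (assumption)
CLIP_SECONDS = 3.75   # clip length used in the paper

def audio_to_spectrogram(signal, fs=SAMPLE_RATE):
    """Convert a 1-D audio signal into a 2-D spectrogram (frequency x time)."""
    freqs, times, sxx = spectrogram(signal, fs=fs, nperseg=256, noverlap=128)
    # Log-scale the power so quieter spectral components remain visible
    # when the array is treated as an image by the CNN.
    return np.log(sxx + 1e-10)

# Example: a synthetic 3.75 s tone stands in for a real speech clip.
t = np.linspace(0, CLIP_SECONDS, int(SAMPLE_RATE * CLIP_SECONDS), endpoint=False)
clip = np.sin(2 * np.pi * 440 * t)
image = audio_to_spectrogram(clip)
print(image.shape)  # 2-D array, ready to be batched as CNN input
```

Because the spectrogram is just a NumPy array, it can be computed on the fly per batch, which matches the paper's claim that minimal offline pre-processing is needed.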

Shauna Revay, Matthew Teschke • 2019

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Spoken Language Identification | VoxForge | Accuracy: 89% | 7 |
