
An Improved Model for Voicing Silent Speech

About

In this paper, we present an improved model for voicing silent speech, where audio is synthesized from facial electromyography (EMG) signals. To give our model greater flexibility to learn its own input features, we directly use EMG signals as input in place of the hand-designed features used by prior work. Our model uses convolutional layers to extract features from the signals and Transformer layers to propagate information across longer distances. To provide better signal for learning, we also introduce an auxiliary task of predicting phoneme labels in addition to predicting speech audio features. On an open-vocabulary intelligibility evaluation, our model improves the state of the art for this task by an absolute 25.8%.
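The abstract describes three components: a convolutional front end over raw EMG, Transformer layers for long-range context, and two prediction heads (audio features plus an auxiliary phoneme classifier). A minimal PyTorch sketch of such an architecture is shown below; all layer sizes, strides, and head dimensions are illustrative assumptions, not the paper's actual hyperparameters.

```python
import torch
import torch.nn as nn

class EMGToSpeech(nn.Module):
    """Hypothetical sketch: raw EMG -> conv features -> Transformer -> two heads."""

    def __init__(self, emg_channels=8, d_model=64, n_audio_feats=80, n_phonemes=40):
        super().__init__()
        # Convolutional front end: learns features directly from raw EMG,
        # downsampling the signal in time (strides are assumptions).
        self.conv = nn.Sequential(
            nn.Conv1d(emg_channels, d_model, kernel_size=9, stride=4, padding=4),
            nn.ReLU(),
            nn.Conv1d(d_model, d_model, kernel_size=9, stride=2, padding=4),
            nn.ReLU(),
        )
        # Transformer layers propagate information across longer distances.
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)
        # Two heads: speech audio features plus auxiliary phoneme labels.
        self.audio_head = nn.Linear(d_model, n_audio_feats)
        self.phoneme_head = nn.Linear(d_model, n_phonemes)

    def forward(self, emg):                    # emg: (batch, channels, time)
        x = self.conv(emg).transpose(1, 2)     # -> (batch, frames, d_model)
        x = self.transformer(x)
        return self.audio_head(x), self.phoneme_head(x)

model = EMGToSpeech()
# 2 utterances, 8 EMG channels, 800 raw samples each.
audio, phonemes = model(torch.randn(2, 8, 800))
```

At training time the two heads would be combined into a single multi-task objective (e.g. audio regression loss plus a weighted phoneme cross-entropy), which is how the auxiliary task provides extra learning signal.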

David Gaddy, Dan Klein • 2021

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Silent Speech Recognition | EMG open-vocabulary 2020 (test) | WER 42.2 | 5 |
