
Improving End-to-End Speech-to-Intent Classification with Reptile

About

End-to-end spoken language understanding (SLU) systems have many advantages over conventional pipeline systems, but collecting in-domain speech data to train an end-to-end system is costly and time-consuming. This raises a question: how can an end-to-end SLU model be trained with limited amounts of data? Many researchers have explored approaches that make use of other related data resources, typically by pre-training parts of the model on high-resource speech recognition. In this paper, we suggest improving the generalization performance of SLU models with a non-standard learning algorithm, Reptile. Though Reptile was originally proposed for model-agnostic meta-learning, we argue that it can also be used to directly learn a target task and yield better generalization than conventional gradient descent. In this work, we apply Reptile to the task of end-to-end spoken intent classification. Experiments on four datasets of different languages and domains show improvements in intent prediction accuracy, both when Reptile is used alone and when it is combined with pre-training.
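The core idea of using Reptile as a plain learning algorithm can be sketched as follows: from the current weights, run a few inner SGD steps on mini-batches, then move the weights only a fraction of the way toward the result. This is a minimal illustrative sketch on a toy least-squares problem, not the paper's implementation; all function names, the toy loss, and the learning rates are assumptions.

```python
import numpy as np

def inner_sgd(theta, batches, grad_fn, lr):
    """Run one SGD step per mini-batch, starting from a copy of theta."""
    phi = theta.copy()
    for batch in batches:
        phi -= lr * grad_fn(phi, batch)
    return phi

def reptile_step(theta, batches, grad_fn, inner_lr=0.1, outer_lr=0.5):
    """Reptile outer update: theta <- theta + outer_lr * (phi - theta),
    where phi is the result of the inner SGD loop."""
    phi = inner_sgd(theta, batches, grad_fn, inner_lr)
    return theta + outer_lr * (phi - theta)

# Toy target task: least-squares regression (stand-in for the intent classifier).
rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
X = rng.normal(size=(64, 2))
y = X @ w_true

def grad(theta, batch):
    Xb, yb = batch
    return 2.0 * Xb.T @ (Xb @ theta - yb) / len(yb)

theta = np.zeros(2)
for epoch in range(200):
    idx = rng.permutation(64).reshape(8, 8)       # 8 mini-batches of 8
    batches = [(X[i], y[i]) for i in idx]
    theta = reptile_step(theta, batches, grad)

print(np.allclose(theta, w_true, atol=1e-2))
```

With `outer_lr=1.0` this reduces to ordinary mini-batch SGD; the interpolation step (`outer_lr < 1`) is what distinguishes Reptile and, per the paper's argument, helps generalization.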

Yusheng Tian, Philip John Gorinski • 2020

Related benchmarks

| Task                          | Dataset                       | Accuracy | Rank |
|-------------------------------|-------------------------------|----------|------|
| Spoken Language Understanding | Fluent Speech Commands (test) | 99.1     | 12   |
| Intent Prediction             | FSC                           | 99.2     | 6    |
| Intent Prediction             | Tamil                         | 94.5     | 6    |
| Intent Prediction             | Grabo                         | 98.9     | 5    |
| Intent Prediction             | CN                            | 55.2     | 4    |
