Joint Speech and Text Training for LLM-Based End-to-End Spoken Dialogue State Tracking

About

End-to-end spoken dialogue state tracking (DST) is made difficult by the combination of having to handle speech input and data scarcity. Recent work has proposed combining speech foundation encoders with large language models to alleviate some of this difficulty. Although this approach yields strong spoken DST models, achieving state-of-the-art performance on realistic multi-turn DST, it struggles to generalize across domains and requires annotated spoken DST training data for each domain of interest. Collecting such data for every target domain is both costly and difficult. Noting that textual DST data is more easily obtained across domains, in this work we propose jointly training on the available spoken DST data and on written textual data from other domains as a way to achieve cross-domain generalization. Our experiments demonstrate the efficacy of the proposed method, achieving good cross-domain DST performance without relying on spoken training data from the target domains.
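The core training idea, mixing spoken DST examples from the source domain with text-only DST examples from other domains in each batch, can be sketched as below. This is a minimal illustration of such a data-mixing scheme, not the paper's implementation; the function name, the fixed text ratio, and the example records are all assumptions for illustration.

```python
import random

def mixed_batches(spoken_data, text_data, batch_size=4, text_ratio=0.5, seed=0):
    """Sketch (hypothetical, not the paper's code): build joint-training
    batches that combine spoken-DST examples with text-only DST examples
    from other domains, so one model is trained on both modalities.

    Note: this consumes (pops from) the input lists.
    """
    rng = random.Random(seed)
    n_text = int(batch_size * text_ratio)      # text-only examples per batch
    n_spoken = batch_size - n_text             # spoken examples per batch
    steps = min(len(spoken_data) // n_spoken, len(text_data) // n_text)
    batches = []
    for _ in range(steps):
        batch = [spoken_data.pop() for _ in range(n_spoken)]
        batch += [text_data.pop() for _ in range(n_text)]
        rng.shuffle(batch)                     # interleave modalities
        batches.append(batch)
    return batches
```

In an actual system, the spoken examples would pass through the speech foundation encoder before reaching the LLM, while the textual examples would be fed to the LLM directly; the mixing ratio is a tunable hyperparameter.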

Katia Vendrame, Bolaji Yusuf, Santosh Kesiraju, Šimon Sedláček, Oldřich Plchot, Jan Černocký • 2025

Related benchmarks

Task | Dataset | Result | Rank
Dialog State Tracking | SpokenWoz (test) | JGA: 43 | 28
Spoken Dialogue State Tracking | MultiWOZ (test) | Joint Goal Accuracy: 32.4 | 17
