
Diverse Retrieval-Augmented In-Context Learning for Dialogue State Tracking

About

There has been significant interest in zero- and few-shot learning for dialogue state tracking (DST) due to the high cost of collecting and annotating task-oriented dialogues. Recent work has demonstrated that in-context learning requires very little data and zero parameter updates, and even outperforms trained methods in the few-shot setting (Hu et al. 2022). We propose RefPyDST, which advances the state of the art with three contributions to in-context learning for DST. First, we formulate DST as a Python programming task, explicitly modeling language coreference as variable reference in Python. Second, since in-context learning depends heavily on the context examples, we propose a method to retrieve a diverse set of relevant examples to improve performance. Finally, we introduce a novel re-weighting method during decoding that takes into account probabilities of competing surface forms, and produces a more accurate dialogue state prediction. We evaluate our approach on MultiWOZ and achieve state-of-the-art multi-domain joint-goal accuracy in zero- and few-shot settings.
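The second contribution, retrieving a diverse yet relevant set of in-context examples, can be illustrated with a greedy maximal-marginal-relevance (MMR) style selection over example embeddings. This is a generic sketch, not RefPyDST's actual learned retriever: the embeddings, the `lam` trade-off parameter, and the `diverse_retrieve` function are illustrative assumptions.

```python
import math

def cosine(u, v):
    # Cosine similarity between two dense vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def diverse_retrieve(query, examples, k, lam=0.5):
    """Greedily pick k example indices, trading off relevance to the
    query against redundancy with already-selected examples (MMR-style).
    lam=1.0 is pure relevance; lam=0.0 is pure diversity."""
    selected = []
    candidates = list(range(len(examples)))
    while candidates and len(selected) < k:
        def score(i):
            rel = cosine(query, examples[i])
            # Penalize similarity to the closest already-chosen example.
            red = max((cosine(examples[i], examples[j]) for j in selected),
                      default=0.0)
            return lam * rel - (1 - lam) * red
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected

# Toy usage: two near-duplicate examples and one distinct one.
exs = [[1.0, 0.0], [0.99, 0.1], [0.0, 1.0]]
picks = diverse_retrieve([1.0, 0.0], exs, k=2, lam=0.3)
print(picks)  # with diversity weighted heavily, skips the near-duplicate
```

With a low `lam`, the selector takes the most relevant example first and then prefers the dissimilar third example over the near-duplicate second one, which mirrors the motivation of diverse retrieval: near-identical demonstrations add little information to the prompt.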

Brendan King and Jeffrey Flanigan • 2023

Related benchmarks

| Task                    | Dataset              | Result               | Rank |
|-------------------------|----------------------|----------------------|------|
| Dialogue State Tracking | MultiWOZ 2.1 (test)  | --                   | 85   |
| Dialogue State Tracking | MultiWOZ 2.4 (test)  | Joint Goal Acc: 65.2 | 45   |
