
Unsupervised End-to-End Task-Oriented Dialogue with LLMs: The Power of the Noisy Channel

About

Training task-oriented dialogue systems typically requires turn-level annotations for interacting with their APIs: e.g., a dialogue state and the system actions taken at each step. These annotations can be costly to produce and error-prone, and they require both domain and annotation expertise. With advances in LLMs, we hypothesize that unlabeled data and a schema definition are sufficient for building a working task-oriented dialogue system, completely unsupervised. We consider a novel unsupervised setting with only (1) a well-defined API schema and (2) a set of unlabeled dialogues between a user and an agent. We propose an innovative approach using expectation-maximization (EM) that infers turn-level annotations as latent variables using a noisy channel model to build an end-to-end dialogue agent. Evaluating our approach on the MultiWOZ benchmark, our method more than doubles the dialogue success rate of a strong GPT-3.5 baseline.
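To make the idea concrete, here is a minimal toy sketch of noisy-channel hard-EM over latent turn annotations. It is not the paper's implementation: the real system scores candidates with LLM log-probabilities, whereas this sketch substitutes a simple word-overlap score as a stand-in channel model p(utterance | annotation), and the candidate annotations and utterances are invented for illustration.

```python
import math
from collections import Counter

# Hypothetical candidate annotations (stand-ins for schema-derived states/actions).
CANDIDATES = ["book_hotel", "find_restaurant"]

def channel_logp(utterance, z):
    # Stand-in for an LLM-scored channel model p(utterance | annotation):
    # here, crude word overlap between the utterance and the annotation name.
    overlap = len(set(utterance.split()) & set(z.split("_")))
    return math.log(overlap + 0.5)

def e_step(utterances, prior):
    # Noisy-channel scoring: pick argmax_z [log p(z) + log p(utterance | z)]
    # as a hard assignment of the latent annotation for each turn.
    return [
        max(CANDIDATES, key=lambda z: math.log(prior[z]) + channel_logp(u, z))
        for u in utterances
    ]

def m_step(labels):
    # Re-estimate the prior p(z) from the inferred latent annotations
    # (with add-one smoothing so no candidate gets zero probability).
    counts = Counter(labels)
    total = sum(counts.values()) + len(CANDIDATES)
    return {z: (counts[z] + 1) / total for z in CANDIDATES}

def em(utterances, iters=3):
    # Alternate inference (E-step) and re-estimation (M-step).
    prior = {z: 1.0 / len(CANDIDATES) for z in CANDIDATES}
    labels = []
    for _ in range(iters):
        labels = e_step(utterances, prior)
        prior = m_step(labels)
    return labels, prior

labels, prior = em(["please book a hotel room", "find me a cheap restaurant"])
# labels → ["book_hotel", "find_restaurant"]
```

In the paper's setting, the E-step would infer dialogue states and system actions consistent with the API schema, and the M-step would update the models used for scoring; the toy above only preserves the overall EM-with-noisy-channel structure.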

Brendan King, Jeffrey Flanigan • 2024

Related benchmarks

Task                     Dataset               Result        Rank
Task-oriented Dialogue   MultiWOZ 2.4 (test)   JGA 39.7      15
Function Calling         API-Bank Level-2      ROUGE-L 4.2   12
Function Calling         API-Bank Level-1      ROUGE-L 3.7   12
