
BERT-DST: Scalable End-to-End Dialogue State Tracking with Bidirectional Encoder Representations from Transformer

About

An important yet rarely tackled problem in dialogue state tracking (DST) is scalability for dynamic ontologies (e.g., movie, restaurant) and unseen slot values. We focus on a specific condition, where the ontology is unknown to the state tracker, but the target slot value (except for none and dontcare), possibly unseen during training, can be found as a word segment in the dialogue context. Prior approaches often rely on candidate generation from n-gram enumeration or slot tagger outputs, which can be inefficient or suffer from error propagation. We propose BERT-DST, an end-to-end dialogue state tracker which directly extracts slot values from the dialogue context. We use BERT as the dialogue context encoder, whose contextualized language representations are suitable for scalable DST to identify slot values from their semantic context. Furthermore, we employ encoder parameter sharing across all slots, with two advantages: (1) the number of parameters does not grow linearly with the ontology; (2) language representation knowledge can be transferred among slots. Empirical evaluation shows BERT-DST with cross-slot parameter sharing outperforms prior work on the benchmark scalable DST datasets Sim-M and Sim-R, and achieves competitive performance on the standard DSTC2 and WOZ 2.0 datasets.

Guan-Lin Chao, Ian Lane • 2019
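The architecture described in the abstract, where a shared BERT encoder feeds a slot-status classifier (none / dontcare / span) over the [CLS] vector plus start/end span pointers over the token vectors, can be sketched as below. This is a minimal illustration, not the authors' implementation: class and variable names are hypothetical, and random tensors stand in for BERT's encoder outputs so the sketch runs without pretrained weights.

```python
import torch
import torch.nn as nn

class BertDstHead(nn.Module):
    """Sketch of BERT-DST-style output heads (hypothetical names).

    A shared encoder (BERT in the paper; stubbed out here) produces a
    [CLS] vector and per-token vectors. A 3-way classifier over [CLS]
    predicts the slot status (none / dontcare / span), and two linear
    pointers over the token vectors predict the span's start and end.
    """

    def __init__(self, hidden_size: int):
        super().__init__()
        self.status = nn.Linear(hidden_size, 3)  # none / dontcare / span
        self.span = nn.Linear(hidden_size, 2)    # start & end logits per token

    def forward(self, cls_vec, token_vecs):
        status_logits = self.status(cls_vec)                         # (batch, 3)
        start_logits, end_logits = self.span(token_vecs).unbind(-1)  # (batch, seq) each
        return status_logits, start_logits, end_logits

# Demo with random stand-ins for encoder outputs (batch=2, seq_len=10, hidden=16).
batch, seq_len, hidden = 2, 10, 16
head = BertDstHead(hidden)
status, start, end = head(torch.randn(batch, hidden),
                          torch.randn(batch, seq_len, hidden))
print(status.shape, start.shape, end.shape)
# torch.Size([2, 3]) torch.Size([2, 10]) torch.Size([2, 10])
```

At inference, when the status classifier selects span, the argmax over the start and end logits recovers the predicted word segment directly from the dialogue context; because the encoder is shared across slots, the parameter count stays flat as the ontology grows.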

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Dialogue State Tracking | WOZ 2.0 (test) | Joint Goal Accuracy | 87.7 | 65 |
| Dialogue State Tracking | WOZ 2.0 | Joint Goal Accuracy | 87.7 | 7 |
| Dialogue State Tracking | M2M Simulated Restaurant (test) | Joint Goal Accuracy | 89.6 | 6 |
| Dialogue State Tracking | M2M Simulated Movie (test) | Joint Goal Accuracy | 80.1 | 6 |
| Dialogue State Tracking | DSTC2 | Joint Goal Accuracy | 69.3 | 5 |
| Dialogue State Tracking | M2M | Joint Goal Accuracy | 86.9 | 3 |
