
Dynamic Knowledge Fusion for Multi-Domain Dialogue State Tracking

About

The performance of task-oriented dialogue models is strongly tied to how well they track the dialogue state, which records and updates user information across multi-turn interactions. However, current multi-domain dialogue state tracking (DST) faces two key challenges: the difficulty of effectively modeling dialogue history and the limited availability of annotated data, both of which hinder model performance. To address these problems, we develop a dynamic knowledge fusion framework for multi-domain DST. The model operates in two stages: first, an encoder-only network trained with contrastive learning encodes the dialogue history and candidate slots, selecting relevant slots by their correlation scores; second, dynamic knowledge fusion uses the structured information of the selected slots as contextual prompts to improve the accuracy and consistency of dialogue state tracking. This design enables more accurate integration of dialogue context and domain knowledge. Results on multi-domain dialogue benchmarks show that our method notably improves both tracking accuracy and generalization, validating its ability to handle complex dialogue scenarios.
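The two-stage pipeline described above can be sketched in miniature. The snippet below is a hypothetical illustration only: it replaces the paper's contrastively trained encoder with a trivial token-overlap score, and all function names, slot strings, and the top-k cutoff are assumptions rather than details from the paper. It shows stage one (scoring candidate slots against the dialogue history and keeping the most relevant ones) and how stage two would serialize the selected slots into a contextual prompt for the state-tracking generator.

```python
def select_slots(history, candidate_slots, top_k=2):
    """Stage one (toy version): rank candidate slots by their relevance
    to the dialogue history. A real implementation would use the
    contrastively trained encoder's correlation scores; here we use
    simple token overlap as a deterministic stand-in."""
    history_toks = set(history.lower().split())

    def score(slot):
        toks = set(slot.lower().split())
        return len(toks & history_toks) / len(toks)

    # sorted() is stable, so ties keep their original slot order
    return sorted(candidate_slots, key=score, reverse=True)[:top_k]


def build_prompt(history, selected):
    """Stage two (toy version): fuse the selected slots into a
    contextual prompt prefix for the DST generator."""
    return "relevant slots: " + "; ".join(selected) + "\ndialogue: " + history


history = "i need a cheap hotel in the north and a taxi to the station"
slots = ["hotel price range", "hotel area", "restaurant food", "train departure"]

selected = select_slots(history, slots)   # both hotel slots win on overlap
prompt = build_prompt(history, selected)
```

The key design point carried over from the abstract is the filtering step: only slots judged relevant to the current history reach the prompt, so the generator conditions on a small, structured context instead of the full ontology.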

Haoxiang Su, Ruiyu Fang, Liting Jiang, Xiaomeng Huang, Shuangyong Song• 2026

Related benchmarks

Task                     | Dataset      | Result                    | Rank
Dialogue State Tracking  | MultiWOZ 2.1 | Joint Goal Accuracy 58.2  | 36
Dialogue State Tracking  | MultiWOZ 2.2 | Joint Goal Accuracy 62.3  | 9
Dialogue State Tracking  | MultiWOZ 2.3 | Joint Goal Accuracy 63.1  | 9
Dialogue State Tracking  | MultiWOZ 2.4 | Joint Goal Accuracy 77.3  | 8
