
Template-assisted Contrastive Learning of Task-oriented Dialogue Sentence Embeddings

About

Learning high-quality sentence embeddings from dialogues has drawn increasing attention, as it is essential for solving a variety of dialogue-oriented tasks at low annotation cost. Annotating and gathering utterance relationships in conversations is difficult, while token-level annotations, e.g., entities, slots and templates, are much easier to obtain. Existing sentence embedding methods are usually sentence-level self-supervised frameworks and cannot utilize such token-level extra knowledge. We introduce Template-aware Dialogue Sentence Embedding (TaDSE), a novel augmentation method that utilizes template information to learn utterance embeddings via a self-supervised contrastive learning framework. We further enhance the effect with a synthetically augmented dataset that diversifies utterance-template association, in which slot-filling is a preliminary step. We evaluate TaDSE performance on five downstream benchmark dialogue datasets. The experiment results show that TaDSE achieves significant improvements over previous SOTA methods for dialogue. We further introduce a novel analytic instrument, the semantic compression test, for which we discover a correlation with uniformity and alignment. Our code is available at https://github.com/minsik-ai/Template-Contrastive-Embedding
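The abstract names two technical ingredients: a self-supervised contrastive objective over utterance/template-augmented pairs, and an analysis based on uniformity and alignment (Wang and Isola's embedding-quality metrics). The sketch below illustrates both in NumPy under stated assumptions; it is not the paper's implementation. In the real method the embeddings come from a pretrained Transformer encoder, and the function names, batch layout, and temperature value here are illustrative choices.

```python
import numpy as np


def info_nce(u, v, temperature=0.05):
    """In-batch contrastive (InfoNCE) loss: u[i] and v[i] are embeddings
    of an utterance and its template-based augmentation; the other v[j]
    in the batch act as negatives. Illustrative sketch, not TaDSE itself."""
    u = u / np.linalg.norm(u, axis=1, keepdims=True)
    v = v / np.linalg.norm(v, axis=1, keepdims=True)
    sim = u @ v.T / temperature                  # (N, N) scaled cosine similarities
    sim = sim - sim.max(axis=1, keepdims=True)   # shift rows for numerical stability
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))           # cross-entropy on matching pairs


def alignment(u, v, alpha=2):
    """Alignment metric: mean distance between positive pairs (lower is better)."""
    u = u / np.linalg.norm(u, axis=1, keepdims=True)
    v = v / np.linalg.norm(v, axis=1, keepdims=True)
    return np.mean(np.linalg.norm(u - v, axis=1) ** alpha)


def uniformity(u, t=2):
    """Uniformity metric: log mean Gaussian potential over all distinct
    pairs of embeddings (lower means more uniformly spread on the sphere)."""
    u = u / np.linalg.norm(u, axis=1, keepdims=True)
    d2 = np.sum((u[:, None, :] - u[None, :, :]) ** 2, axis=-1)
    mask = ~np.eye(len(u), dtype=bool)           # exclude self-pairs
    return np.log(np.mean(np.exp(-t * d2[mask])))
```

With perfectly aligned positives (`v == u`), the alignment metric is zero and the InfoNCE loss is near zero, since each utterance is most similar to its own augmentation; the paper's observation is that such metrics correlate with its semantic compression test.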

Minsik Oh, Jiwei Li, Guoyin Wang • 2023

Related benchmarks

Task                   Dataset                  Metric        Result   Rank
Intent Detection       ATIS                     ID Accuracy   89.7     32
Intent Classification  SNIPS (unsupervised)     Accuracy      97       9
Intent Classification  ATIS (unsupervised)      Accuracy      89.7     9
Intent Classification  MASSIVE (unsupervised)   Accuracy      79.15    9
Intent Classification  HWU64 (unsupervised)     Accuracy      82.77    9
Intent Classification  Clinc150 (unsupervised)  Accuracy      72.49    9
Intent Classification  SNIPS                    Accuracy      97       5
