
Structural Pre-training for Dialogue Comprehension

About

Pre-trained language models (PrLMs) have demonstrated superior performance due to their strong ability to learn universal language representations from self-supervised pre-training. However, even with the help of powerful PrLMs, it is still challenging to effectively capture task-related knowledge from dialogue texts, which are rich in correlations among speaker-aware utterances. In this work, we present SPIDER, Structural Pre-traIned DialoguE Reader, to capture dialogue-exclusive features. To model these dialogue-specific features, we propose two training objectives in addition to the original LM objectives: 1) utterance order restoration, which predicts the order of permuted utterances in the dialogue context; 2) sentence backbone regularization, which regularizes the model to improve the factual correctness of summarized subject-verb-object triplets. Experimental results on widely used dialogue benchmarks verify the effectiveness of the newly introduced self-supervised tasks.

Zhuosheng Zhang, Hai Zhao • 2021
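
The utterance order restoration objective is easy to picture in code: shuffle the utterances of a dialogue and supervise the model to recover each utterance's original position. The PyTorch sketch below is a minimal illustration under assumptions of our own, not the paper's exact implementation: the encoder is replaced by stand-in vectors, and the head design and constants (MAX_UTTERANCES, HIDDEN_SIZE) are hypothetical.

```python
# Minimal sketch of an utterance order restoration objective.
# Assumes one vector per utterance from any PrLM encoder (e.g., its
# first-token representation); all names here are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

MAX_UTTERANCES = 16  # assumed cap on utterances per dialogue
HIDDEN_SIZE = 768    # assumed encoder hidden size

class OrderRestorationHead(nn.Module):
    """Classifies each shuffled utterance into its original position."""
    def __init__(self, hidden_size=HIDDEN_SIZE, max_utts=MAX_UTTERANCES):
        super().__init__()
        self.classifier = nn.Linear(hidden_size, max_utts)

    def forward(self, utterance_vectors):          # [num_utts, hidden_size]
        return self.classifier(utterance_vectors)  # [num_utts, max_utts]

# Toy usage: shuffle five utterance vectors and train to recover the order.
torch.manual_seed(0)
utterances = torch.randn(5, HIDDEN_SIZE)  # stand-in for PrLM utterance vectors
perm = torch.randperm(5)                  # permute the dialogue
shuffled = utterances[perm]               # model sees utterances out of order
labels = perm                             # shuffled[i] originally sat at perm[i]

head = OrderRestorationHead()
loss = F.cross_entropy(head(shuffled), labels)  # supervise position recovery
loss.backward()
```

Framing order recovery as per-utterance position classification is one common way to realize such an objective; in practice it would be added to the standard LM loss during pre-training.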

Related benchmarks

Task | Dataset | Metric | Result | Rank
Response Selection | E-commerce (test) | Recall@1 (R10) | 0.708 | 81
Multi-turn Response Selection | Douban Conversation Corpus | MAP | 60.9 | 67
Multi-turn Response Selection | Ubuntu Corpus | Recall@1 (R10) | 86.9 | 65
Multi-turn Dialogue Reasoning | MuTual (test) | MRR | 0.956 | 19
Extractive Question Answering | Molweni (test) | EM | 48.69 | 14
Emotion Prediction | SNEP-Twitter (test) | AUC | 81.98 | 14
Discourse Parsing | Discourse Parsing (test) | F1 (RL) | 62.79 | 14
Emotion Prediction | SNEP-Reddit (test) | AUC | 64.88 | 14
