
Speaker-Aware BERT for Multi-Turn Response Selection in Retrieval-Based Chatbots

About

In this paper, we study the problem of employing pre-trained language models for multi-turn response selection in retrieval-based chatbots. A new model, named Speaker-Aware BERT (SA-BERT), is proposed to make the model aware of speaker change information, an important and intrinsic property of multi-turn dialogues. Furthermore, a speaker-aware disentanglement strategy is proposed to handle entangled dialogues; it selects a small number of the most important utterances as the filtered context, based on the speaker information they contain. Finally, domain adaptation is performed to incorporate in-domain knowledge into the pre-trained language models. Experiments on five public datasets show that our proposed model outperforms existing models on all metrics by large margins and achieves new state-of-the-art performance for multi-turn response selection.

Jia-Chen Gu, Tianda Li, Quan Liu, Zhen-Hua Ling, Zhiming Su, Si Wei, Xiaodan Zhu • 2020
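To make the core idea concrete, here is a minimal PyTorch sketch of a speaker-aware input representation: a learned speaker embedding is added on top of BERT's usual token, position, and segment embeddings, so the encoder can tell which speaker produced each token of the concatenated multi-turn context. This is an illustrative assumption of how such an embedding layer could look, not the authors' released implementation; the module name `SpeakerAwareEmbeddings` and all hyperparameters are hypothetical.

```python
import torch
import torch.nn as nn


class SpeakerAwareEmbeddings(nn.Module):
    """Illustrative sketch (not the authors' code): adds a learned
    speaker embedding to BERT-style token, position, and segment
    embeddings, encoding speaker change information in the input."""

    def __init__(self, vocab_size=30522, hidden=768, max_pos=512,
                 num_segments=2, num_speakers=2, dropout=0.1):
        super().__init__()
        self.token = nn.Embedding(vocab_size, hidden)
        self.position = nn.Embedding(max_pos, hidden)
        self.segment = nn.Embedding(num_segments, hidden)
        # New table: one row per speaker role (e.g. 0 = speaker A, 1 = speaker B).
        self.speaker = nn.Embedding(num_speakers, hidden)
        self.norm = nn.LayerNorm(hidden)
        self.dropout = nn.Dropout(dropout)

    def forward(self, token_ids, segment_ids, speaker_ids):
        # Position indices 0..seq_len-1, broadcast over the batch.
        positions = torch.arange(token_ids.size(1), device=token_ids.device)
        positions = positions.unsqueeze(0).expand_as(token_ids)
        # Sum all four embedding tables, then normalize, as in BERT.
        x = (self.token(token_ids) + self.position(positions)
             + self.segment(segment_ids) + self.speaker(speaker_ids))
        return self.dropout(self.norm(x))


# Example: a toy two-turn context followed by a candidate response.
emb = SpeakerAwareEmbeddings()
token_ids = torch.randint(0, 30522, (1, 16))
segment_ids = torch.zeros(1, 16, dtype=torch.long)   # 0 = context, 1 = response
speaker_ids = torch.tensor([[0] * 8 + [1] * 8])      # speaker change at turn boundary
hidden_states = emb(token_ids, segment_ids, speaker_ids)  # shape (1, 16, 768)
```

The same speaker labels could also drive the disentanglement strategy described above, e.g. by keeping only the utterances whose speaker matches the one being responded to; that selection logic is not shown here.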

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Multi-turn Response Selection | Ubuntu Dialogue Corpus V1 (test) | R10@1 | 85.5 | 102 |
| Response Selection | Douban Conversation Corpus (test) | MAP | 0.619 | 94 |
| Response Selection | E-commerce (test) | Recall@1 (R10) | 0.704 | 81 |
| Multi-turn Response Selection | E-commerce Dialogue Corpus (test) | R@1 (Top 10 Set) | 70.4 | 70 |
| Multi-turn Response Selection | Douban Conversation Corpus | MAP | 61.9 | 67 |
| Multi-turn Response Selection | Ubuntu Corpus | Recall@1 (R10) | 85.5 | 65 |
| Response Selection | Ubuntu (test) | Recall@1 (Top 10) | 0.855 | 58 |
| Abstractive dialogue summarization | SamSum (test) | -- | -- | 53 |
| Machine Reading Comprehension | Molweni (test) | EM | 57.9 | 49 |
| Question Answering | FriendsQA (test) | EM | 57.8 | 24 |

Showing 10 of 34 rows.

Other info

Code
