Speaker-Aware BERT for Multi-Turn Response Selection in Retrieval-Based Chatbots
About
In this paper, we study the problem of employing pre-trained language models for multi-turn response selection in retrieval-based chatbots. A new model, named Speaker-Aware BERT (SA-BERT), is proposed to make the model aware of speaker change information, an important and intrinsic property of multi-turn dialogues. Furthermore, a speaker-aware disentanglement strategy is proposed to tackle entangled dialogues. This strategy selects a small number of the most important utterances as the filtered context, according to the speaker information they contain. Finally, domain adaptation is performed to incorporate in-domain knowledge into the pre-trained language models. Experiments on five public datasets show that our proposed model outperforms previous models on all metrics by large margins and achieves new state-of-the-art performance for multi-turn response selection.
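The speaker-aware disentanglement idea above can be sketched as a simple context filter: scan the dialogue history from most recent to oldest and keep only utterances spoken by, or addressed to, the response's speaker, up to a small budget. This is a minimal illustrative sketch, not the paper's exact algorithm; the function name `filter_context`, the `max_utts` budget, and the substring check for addressee mentions are all simplifying assumptions.

```python
def filter_context(utterances, response_speaker, max_utts=4):
    """Speaker-aware disentanglement (sketch).

    utterances: list of (speaker, text) pairs in chronological order.
    Keeps the most recent utterances that are spoken by the response's
    speaker, or that mention that speaker (a crude proxy for "addressed
    to"), up to max_utts, and returns them in chronological order.
    """
    kept = []
    for spk, text in reversed(utterances):
        if spk == response_speaker or response_speaker in text:
            kept.append((spk, text))
        if len(kept) == max_utts:
            break
    return list(reversed(kept))


# Example: an entangled multi-party dialogue.
ctx = [("A", "my ubuntu install fails"),
       ("B", "C: try the mini iso"),
       ("C", "A: which version?"),
       ("A", "20.04"),
       ("B", "unrelated chatter"),
       ("C", "A: check the md5sum")]
filtered = filter_context(ctx, "A", max_utts=3)
# Keeps only utterances by A or addressed to A, dropping B's side thread.
```

In the paper this filtered context, rather than the full entangled dialogue, is what gets paired with each candidate response and fed to SA-BERT.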
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Multi-turn Response Selection | Ubuntu Dialogue Corpus V1 (test) | R10@1 85.5 | 102 |
| Response Selection | Douban Conversation Corpus (test) | MAP 0.619 | 94 |
| Response Selection | E-commerce (test) | Recall@1 (R10) 0.704 | 81 |
| Multi-turn Response Selection | E-commerce Dialogue Corpus (test) | R@1 (Top 10 Set) 70.4 | 70 |
| Multi-turn Response Selection | Douban Conversation Corpus | MAP 61.9 | 67 |
| Multi-turn Response Selection | Ubuntu Corpus | Recall@1 (R10) 85.5 | 65 |
| Response Selection | Ubuntu (test) | Recall@1 (Top 10) 0.855 | 58 |
| Abstractive dialogue summarization | SamSum (test) | -- | 53 |
| Machine Reading Comprehension | Molweni (test) | EM 57.9 | 49 |
| Question Answering | FriendsQA (test) | EM 57.8 | 24 |