
SoulChat: Improving LLMs' Empathy, Listening, and Comfort Abilities through Fine-tuning with Multi-turn Empathy Conversations

About

Large language models (LLMs) have been widely applied across fields thanks to their strong capabilities in knowledge memorization and chain-of-thought (CoT) reasoning. When applied to psychological counseling, however, they often rush to offer generic advice. Users seeking psychological support need empathy, trust, understanding, and comfort rather than merely reasonable suggestions. To this end, we constructed a multi-turn empathetic conversation dataset of more than 2 million samples, in which the input is the multi-turn conversation context and the target is an empathetic response covering expressions such as questioning, comfort, recognition, listening, trust, and emotional support. Experiments show that the empathy of LLMs can be significantly enhanced by fine-tuning on multi-turn dialogue histories paired with responses that are closer to the expression of a psychological counselor.
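The abstract describes training samples where the input is the multi-turn conversation context and the target is the counselor's empathetic response. A minimal sketch of that pairing, assuming a simple turn-list representation (the role tags and function names below are illustrative, not from the paper's released code):

```python
# Hypothetical sketch: convert a multi-turn counseling dialogue into
# supervised fine-tuning samples. Each counselor turn becomes a target,
# with all preceding turns concatenated as the input context.

def build_sft_samples(dialogue):
    """dialogue: list of (role, utterance) tuples in turn order."""
    samples = []
    history = []
    for role, text in dialogue:
        if role == "psychologist" and history:
            # Context is the running conversation up to this turn.
            context = "\n".join(f"{r.capitalize()}: {t}" for r, t in history)
            samples.append({"input": context, "target": text})
        history.append((role, text))
    return samples

dialogue = [
    ("user", "I've been feeling really anxious about work lately."),
    ("psychologist", "That sounds stressful. What has been weighing on you most?"),
    ("user", "My manager keeps piling on deadlines and I can't keep up."),
    ("psychologist", "It makes sense that you feel overwhelmed under that pressure."),
]

samples = build_sft_samples(dialogue)
```

Here every counselor response is supervised against the full dialogue history, so the model learns multi-turn listening rather than single-turn advice-giving.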

Yirong Chen, Xiaofen Xing, Jingkai Lin, Huimin Zheng, Zhenyu Wang, Qi Liu, Xiangmin Xu• 2023

Related benchmarks

Task | Dataset | Metric | Result | Rank
Emotional Support Conversation | ESC-Eval | Diversity | 3 | 24
Emotional Support Conversation | ESConv | BERT-SCORE | 85.75 | 24
Emotional Support Conversation | EmoHarbor | Problem Resolution | 1.88 | 24
CBT Conversation Generation | EmoLLM (test) | Bert Score | 0.9 | 20
Multi-turn Psychological Counseling Dialogue Generation | CPsyCoun | ROUGE-1 | 28.93 | 16
Emotional Support Conversation | SAGE (test) | Sentience | 44.38 | 14
CBT Conversation Generation | CBT conversation evaluation dataset | Semantic Coherence | 1.75 | 10
Emotional Outcome Evaluation | PANAS Positive Attitude | Positive Affect Shift | 1.94 | 6
Emotional Outcome Evaluation | PANAS Neutral Attitude | Positive Affect Shift | 0.88 | 6
Clinical Competence Assessment | CTRS | Understanding | 4.32 | 6

(Showing 10 of 13 rows.)
