
EmoLLM: Appraisal-Grounded Cognitive-Emotional Co-Reasoning in Large Language Models

About

Large language models (LLMs) demonstrate strong cognitive intelligence (IQ), yet many real-world interactions also require emotional intelligence (EQ) to produce responses that are both factually reliable and emotionally appropriate. In settings such as emotional support, technical assistance, and consultation, effective dialogue depends on how situations are appraised with respect to the user's needs, goals, and coping capacity. Inspired by appraisal theory, we propose EmoLLM, an appraisal-grounded framework for IQ/EQ co-reasoning in dialogue. EmoLLM uses an explicit Appraisal Reasoning Graph (ARG) to structure intermediate reasoning over contextual facts, inferred user needs, appraisal dimensions, emotional states, and response strategies before generating a reply. We train EmoLLM in a multi-turn role-play environment with reinforcement learning, where reverse-perspective reasoning provides reward signals based on the predicted user-side consequences of responses. Across diverse dialogue settings, EmoLLM improves emotional state outcomes and response quality over strong baselines while preserving factual reliability.
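The ARG pipeline described above can be sketched as an explicit staged data structure. Everything below is an illustrative assumption: the stage names follow the abstract (contextual facts, user needs, appraisal dimensions, emotional state, response strategy), but the field values, the `co_reason` helper, and the appraisal dimension names are hypothetical, not the paper's implementation.

```python
from dataclasses import dataclass, field

@dataclass
class AppraisalReasoningGraph:
    """Hypothetical sketch of the paper's ARG: the ordered intermediate
    reasoning stages filled in before a reply is generated."""
    contextual_facts: list = field(default_factory=list)
    user_needs: list = field(default_factory=list)
    appraisal: dict = field(default_factory=dict)   # appraisal dimensions -> scores
    emotional_state: str = ""
    response_strategy: str = ""

def co_reason(dialogue_history):
    """Stub pipeline: in the actual system each stage would be produced
    by the LLM's intermediate reasoning; here the values are hard-coded
    to show the data flow only."""
    arg = AppraisalReasoningGraph()
    arg.contextual_facts = list(dialogue_history)
    arg.user_needs = ["reassurance"]                       # inferred from facts (stubbed)
    arg.appraisal = {"goal_congruence": 0.2,               # dimension names are assumptions
                     "coping_potential": 0.5}
    arg.emotional_state = "anxious"
    arg.response_strategy = "validate feelings, then offer concrete guidance"
    return arg

arg = co_reason(["I failed my exam.", "I don't know what to do."])
print(arg.response_strategy)
```

The point of making the graph explicit is that each downstream stage (e.g. the response strategy) can condition on, and be evaluated against, the earlier appraisal stages rather than being generated in one opaque step.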

Yifei Zhang, Mingyang Li, Henry Gao, Liang Zhao • 2026

Related benchmarks

Task                  | Dataset | Success Rate (SR) | Rank
----------------------|---------|-------------------|-----
Multi-turn role-play  | ED      | 92.1              | 12
Multi-turn role-play  | MSD     | 83.2              | 12
Multi-turn role-play  | MedD    | 95.3              | 12
Multi-turn role-play  | ICLR    | 96.2              | 12
Empathetic Dialogue   | ED      | 92.1              | 5
Empathetic Dialogue   | MedD    | 95.3              | 5
Empathetic Dialogue   | MSD     | 83.2              | 5
Empathetic Dialogue   | ICLR    | 96.2              | 5
