
A Unified Framework for Emotion Recognition and Sentiment Analysis via Expert-Guided Multimodal Fusion with Large Language Models

About

Multimodal emotion understanding requires effective integration of text, audio, and visual modalities for both discrete emotion recognition and continuous sentiment analysis. We present EGMF, a unified framework combining expert-guided multimodal fusion with large language models. Our approach features three specialized expert networks: a fine-grained local expert for subtle emotional nuances, a semantic correlation expert for cross-modal relationships, and a global context expert for long-range dependencies. These experts are adaptively integrated through hierarchical dynamic gating for context-aware feature selection. The enhanced multimodal representations are then injected into LLMs via pseudo tokens and prompt-based conditioning, enabling a single generative framework to handle both classification and regression through natural language generation. We employ LoRA fine-tuning for computational efficiency. Experiments on bilingual benchmarks (MELD, CHERMA, MOSEI, SIMS-V2) demonstrate consistent improvements over state-of-the-art methods, with superior cross-lingual robustness revealing universal patterns in multimodal emotional expression across English and Chinese. We will release the source code publicly.
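The expert-guided fusion described above can be sketched as a small mixture-of-experts layer: three expert projections of the fused multimodal feature, combined by a softmax gate computed from the same feature. This is a minimal illustrative sketch in NumPy; the expert names follow the abstract, but the linear-projection internals, dimensions, and single-level gate (the paper describes hierarchical gating) are simplifying assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax along the last axis
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

class ExpertGatedFusion:
    """Toy sketch of expert-guided fusion with dynamic gating.

    Three experts (local, semantic-correlation, global-context) each
    transform the fused text/audio/visual feature; a learned gate
    produces per-sample mixture weights over the three expert outputs.
    Weights are random here purely for illustration.
    """

    def __init__(self, dim, seed=0):
        rng = np.random.default_rng(seed)
        # one projection matrix per expert: local, semantic, global
        self.experts = [
            rng.standard_normal((dim, dim)) / np.sqrt(dim) for _ in range(3)
        ]
        # gating network: feature -> logits over the 3 experts
        self.gate_w = rng.standard_normal((dim, 3)) / np.sqrt(dim)

    def __call__(self, x):
        # x: (batch, dim) fused multimodal feature
        outs = np.stack([x @ w for w in self.experts], axis=1)  # (batch, 3, dim)
        gates = softmax(x @ self.gate_w)                        # (batch, 3)
        # context-aware weighted sum of expert outputs
        return (gates[:, :, None] * outs).sum(axis=1)           # (batch, dim)

fusion = ExpertGatedFusion(dim=8)
x = np.ones((2, 8))
y = fusion(x)
print(y.shape)  # (2, 8)
```

In the full framework the gated output would then be mapped to pseudo tokens and prepended to the LLM prompt; here the sketch stops at the fused representation.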

Jiaqi Qiao, Xiujuan Xu, Xinran Li, Yu Liu • 2026

Related benchmarks

Task                                  Dataset   Metric              Result  Rank
Emotion Recognition in Conversation   MELD      Weighted Avg F1     65.57   137
Multimodal Sentiment Analysis         MOSEI     F-Score             87.09   22
Emotion Recognition in Conversation   CHERMA    Weighted F1 Score   73.9    9
Multimodal Sentiment Analysis         SIMS V2   Accuracy (2-class)  82.57   7
