VLLMs Provide Better Context for Emotion Understanding Through Common Sense Reasoning
About
Recognising emotions in context involves identifying an individual's apparent emotions while considering contextual cues from the surrounding scene. Previous approaches to this task have typically designed explicit scene-encoding architectures or incorporated external scene-related information, such as captions. However, these methods often utilise limited contextual information or rely on intricate training pipelines to decouple noise from relevant information. In this work, we leverage the capabilities of Vision-and-Large-Language Models (VLLMs) to enhance in-context emotion classification in a more straightforward manner. Our proposed method follows a simple yet effective two-stage approach. First, we prompt VLLMs to generate natural language descriptions of the subject's apparent emotion in relation to the visual context. Second, the descriptions, together with the visual input, are used to train a transformer-based architecture that fuses text and visual features before the final classification. This method not only simplifies the training process but also significantly improves performance. Experimental results demonstrate that the textual descriptions effectively guide the model to constrain the noisy visual input, allowing our fused architecture to outperform the individual modalities. Our approach achieves state-of-the-art performance on three datasets, BoLD, EMOTIC, and CAER-S, without bells and whistles. The code will be made publicly available on GitHub: https://github.com/NickyFot/EmoCommonSense.git
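To make the second stage concrete, below is a minimal PyTorch sketch of a fusion classifier that joins VLLM-generated description embeddings with visual features via a small transformer encoder before classification. This is an illustration, not the paper's implementation: the class name `TextVisualFusionClassifier`, the feature dimensions, the layer counts, the mean-pooling, and the 26-way output head (EMOTIC's discrete categories) are all assumptions, and stage one (prompting the VLLM for the description) is only indicated in the comments.

```python
# Minimal sketch of the fusion stage, assuming pre-extracted features:
#   - text_feats: token embeddings of the VLLM-generated emotion description
#     (stage 1, not shown here, would prompt a VLLM about the subject in context)
#   - vis_feats:  visual tokens / region features from an image or video backbone
# All dimensions, depths, and the pooling strategy below are placeholders.
import torch
import torch.nn as nn


class TextVisualFusionClassifier(nn.Module):
    """Illustrative text-visual fusion: project both modalities into a shared
    space, run joint self-attention over the concatenated tokens, classify."""

    def __init__(self, text_dim=768, vis_dim=2048, d_model=512, num_classes=26):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, d_model)
        self.vis_proj = nn.Linear(vis_dim, d_model)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=8, batch_first=True
        )
        self.fusion = nn.TransformerEncoder(encoder_layer, num_layers=2)
        self.cls_head = nn.Linear(d_model, num_classes)

    def forward(self, text_feats, vis_feats):
        # text_feats: (B, T_text, text_dim), vis_feats: (B, T_vis, vis_dim)
        tokens = torch.cat(
            [self.text_proj(text_feats), self.vis_proj(vis_feats)], dim=1
        )
        fused = self.fusion(tokens)        # joint self-attention across modalities
        return self.cls_head(fused.mean(dim=1))  # pooled logits, (B, num_classes)


# Toy usage with random features standing in for real description/visual tokens.
model = TextVisualFusionClassifier()
logits = model(torch.randn(2, 16, 768), torch.randn(2, 49, 2048))
print(logits.shape)  # torch.Size([2, 26])
```

The output width would change per benchmark (e.g. CAER-S is a 7-class problem), and the abstract's point about the text constraining noisy visual input corresponds here to the description tokens attending jointly with the visual tokens inside the fusion encoder.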
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Emotion Recognition | CAER-S | Accuracy | 93.08 | 17 |
| In-context Human Emotion Recognition | EMOTIC 1.0 (test) | mAP | 38.52 | 10 |
| Emotion Recognition | BoLD | mAP | 26.66 | 8 |