
FEEL: Quantifying Heterogeneity in Physiological Signals for Generalizable Emotion Recognition

About

Emotion recognition from physiological signals has substantial potential for applications in mental health and emotion-aware systems. However, the lack of standardized, large-scale evaluations across heterogeneous datasets limits progress and model generalization. We introduce FEEL, the first large-scale benchmarking study of emotion recognition using electrodermal activity (EDA) and photoplethysmography (PPG) signals across 19 publicly available datasets. We evaluate 16 architectures spanning traditional machine learning, deep learning, and self-supervised pretraining approaches, structured into four representative modeling paradigms. Our study includes both within-dataset and cross-dataset evaluations, analyzing generalization across variations in experimental settings, device types, and labeling strategies. Our results show that fine-tuned contrastive signal-language pretraining (CLSP) models (71/114) achieve the highest F1 across arousal and valence classification tasks, while simpler models such as Random Forests, LDA, and MLP remain competitive (36/114). Models leveraging handcrafted features (107/114) consistently outperform those trained on raw signal segments, underscoring the value of domain knowledge in low-resource, noisy settings. Further cross-dataset analyses reveal that models trained on real-life setting data generalize well to lab (F1 = 0.79) and constraint-based settings (F1 = 0.78). Similarly, models trained on expert-annotated data transfer effectively to stimulus-labeled (F1 = 0.72) and self-reported datasets (F1 = 0.76). Moreover, models trained on lab-based devices demonstrate high transferability to both custom wearable devices (F1 = 0.81) and the Empatica E4 (F1 = 0.73), underscoring the influence of heterogeneity. More information about FEEL can be found on our website: https://alchemy18.github.io/FEEL_Benchmark/.
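The cross-dataset protocol described above (train on one dataset, evaluate F1 on another) can be sketched as follows. This is a minimal illustration, not the authors' code: the synthetic features stand in for handcrafted EDA/PPG features, and the Random Forest is one of the simpler baselines the abstract mentions.

```python
# Hedged sketch of a cross-dataset evaluation: train a classifier on
# features from a "source" dataset, then report macro F1 on a shifted
# "target" dataset. All data here is synthetic (an assumption for
# illustration), not from the FEEL benchmark.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)

def make_dataset(n, shift):
    # Synthetic stand-in for handcrafted physiological features
    # (e.g., SCR rate, HRV statistics); binary arousal labels.
    X = rng.normal(shift, 1.0, size=(n, 8))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, n) > 1.5 * shift)
    return X, y.astype(int)

X_train, y_train = make_dataset(500, shift=0.0)  # source dataset
X_test, y_test = make_dataset(300, shift=0.3)    # target dataset (domain shift)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
f1 = f1_score(y_test, clf.predict(X_test), average="macro")
print(f"cross-dataset macro F1: {f1:.2f}")
```

The `shift` parameter here is a toy proxy for the heterogeneity (device type, setting, labeling strategy) that the benchmark studies.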

Pragya Singh, Ankush Gupta, Somay Jalan, Mohan Kumar, Pushpendra Singh • 2026

Related benchmarks

Task | Dataset | Metric | Result | Rank
Valence classification | NURSE | F1 Score | 62 | 6
Valence classification | EMOGNITION | F1 Score | 60.1 | 6
Valence classification | EmoWear | F1 Score | 78 | 6
Valence classification | CEAP-360VR | F1 Score | 62 | 6
Valence classification | LAUREATE | F1 Score | 52.7 | 6
Four-class classification | UBFC_PHYS | F1 Score | 70.5 | 3
Four-class classification | PhyMER | F1 Score | 72.3 | 3
Four-class classification | Unobtrusive | F1 Score | 40.9 | 3
Four-class classification | ScientISST MOVE | F1 Score | 80 | 3
Four-class classification | ADARP | F1 Score | 43.3 | 3

Showing 10 of 52 rows
