
ARPGNet: Appearance- and Relation-aware Parallel Graph Attention Fusion Network for Facial Expression Recognition

About

The key to facial expression recognition is to learn discriminative spatial-temporal representations that embed facial expression dynamics. Previous studies predominantly rely on pre-trained Convolutional Neural Networks (CNNs) to learn facial appearance representations, overlooking the relationships between facial regions. To address this issue, this paper presents an Appearance- and Relation-aware Parallel Graph attention fusion Network (ARPGNet) to learn mutually enhanced spatial-temporal representations of appearance and relation information. Specifically, we construct a facial region relation graph and leverage the graph attention mechanism to model the relationships between facial regions. The resulting relational representation sequences, along with CNN-based appearance representation sequences, are then fed into a parallel graph attention fusion module for mutual interaction and enhancement. This module simultaneously explores the complementarity between different representation sequences and the temporal dynamics within each sequence. Experimental results on three facial expression recognition datasets demonstrate that the proposed ARPGNet outperforms or is comparable to state-of-the-art methods.
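To make the relation-modeling step concrete, below is a minimal single-head graph attention sketch in the style of GAT, applied to a set of facial-region feature vectors. This is an illustrative assumption, not the paper's implementation: the region-graph construction, multi-head attention, and the parallel appearance/relation fusion module are not shown, and all function and variable names (`graph_attention`, `node_feats`, `adj`, `W`, `a`) are hypothetical.

```python
import numpy as np

def graph_attention(node_feats, adj, W, a):
    """Illustrative single-head graph attention over facial-region nodes.

    node_feats: (N, F) per-region appearance features
    adj:        (N, N) binary adjacency of the region relation graph
    W:          (F, F') shared linear projection
    a:          (2*F',) attention vector
    Returns:    (N, F') relation-aware region representations
    """
    h = node_feats @ W                              # project features, (N, F')
    n = h.shape[0]
    # Pairwise attention logits e_ij = LeakyReLU(a^T [h_i || h_j])
    e = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            z = np.concatenate([h[i], h[j]]) @ a
            e[i, j] = z if z > 0 else 0.2 * z       # LeakyReLU, slope 0.2
    e = np.where(adj > 0, e, -1e9)                  # mask non-neighbouring regions
    # Softmax over each node's neighbourhood
    e = e - e.max(axis=1, keepdims=True)
    alpha = np.exp(e) / np.exp(e).sum(axis=1, keepdims=True)
    return alpha @ h                                # attention-weighted aggregation

# Toy usage: 5 facial regions with 4-dim features, fully connected graph
rng = np.random.default_rng(0)
feats = rng.standard_normal((5, 4))
out = graph_attention(feats, np.ones((5, 5)),
                      rng.standard_normal((4, 3)), rng.standard_normal(6))
```

In ARPGNet, representations of this kind would be computed per frame and stacked into relational sequences before fusion with the CNN appearance sequences.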

Yan Li, Yong Zhao, Xiaohan Xia, Dongmei Jiang • 2025

Related benchmarks

Task                            Dataset          Metric    Result  Rank
Facial Expression Recognition   AFEW (test)      Accuracy  60.05   35
Facial Expression Recognition   AffWild2 (test)  Accuracy  62.8    33
Facial Expression Recognition   RML (test)       Accuracy  76.53   17
