
A Multi-modal and Multi-task Learning Method for Action Unit and Expression Recognition

About

Analyzing human affect is vital for human-computer interaction systems. Most existing methods are developed in restricted scenarios and are impractical for in-the-wild settings. The Affective Behavior Analysis in-the-wild (ABAW) 2021 Competition provides a benchmark for this in-the-wild problem. In this paper, we introduce a multi-modal and multi-task learning method that uses both visual and audio information. We train the model on both AU and expression annotations and apply a sequence model to further capture associations between video frames. We achieve an AU score of 0.712 and an expression score of 0.477 on the validation set. These results demonstrate the effectiveness of our approach in improving model performance.
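The abstract describes joint training on AU (multi-label) and expression (single-label) annotations. A minimal sketch of such a combined objective is shown below; the function name, the weighting parameters, and the use of BCE + cross-entropy are illustrative assumptions, not the paper's actual loss.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def multitask_loss(au_logits, au_labels, expr_logits, expr_label,
                   w_au=1.0, w_expr=1.0):
    """Hypothetical multi-task objective: multi-label BCE over action
    units plus softmax cross-entropy over expression classes, combined
    with tunable weights (all choices here are assumptions)."""
    eps = 1e-7
    # AU head: per-unit binary cross-entropy, averaged over units
    p = sigmoid(au_logits)
    bce = -np.mean(au_labels * np.log(p + eps)
                   + (1.0 - au_labels) * np.log(1.0 - p + eps))
    # Expression head: numerically stable softmax cross-entropy
    z = expr_logits - expr_logits.max()
    log_softmax = z - np.log(np.exp(z).sum())
    ce = -log_softmax[expr_label]
    return w_au * bce + w_expr * ce
```

In practice, frames missing one annotation type would simply skip that term, which is one common way to handle partially labeled multi-task data.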

Yue Jin, Tianqing Zheng, Chao Gao, Guoqiang Xu • 2021

Related benchmarks

Task: Facial Expression Recognition
Dataset: AffWild2 (test)
Result: Accuracy 47.7
Rank: 33
