
Anticipative Feature Fusion Transformer for Multi-Modal Action Anticipation

About

Although human action anticipation is an inherently multi-modal task, state-of-the-art methods on well-known action anticipation datasets exploit the multiple modalities only through ensembling, averaging the scores of unimodal anticipation networks. In this work we introduce transformer-based modality fusion techniques that unify multi-modal data at an early stage. Our Anticipative Feature Fusion Transformer (AFFT) proves superior to popular score-fusion approaches and achieves state-of-the-art results, outperforming previous methods on EpicKitchens-100 and EGTEA Gaze+. Our model is easily extensible and allows new modalities to be added without architectural changes. Consequently, we extract audio features on EpicKitchens-100 and add them to the set of features commonly used in the community.
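The early-fusion idea described above can be sketched as follows: project each modality's features into a shared dimension, tag tokens with a learned modality embedding, and let a single transformer attend across all modalities before predicting the next action. This is a minimal illustrative sketch, not the paper's implementation; all names, dimensions, and the pooling/head design are assumptions.

```python
# Hedged sketch of transformer-based early feature fusion for action
# anticipation, in the spirit of AFFT. Dimensions, class count, and the
# mean-pool head are illustrative assumptions, not the authors' code.
import torch
import torch.nn as nn


class FeatureFusionTransformer(nn.Module):
    """Fuses per-frame features from several modalities into one token
    sequence before prediction, instead of averaging the scores of
    separate unimodal networks (late / score fusion)."""

    def __init__(self, modality_dims, d_model=256, n_heads=4,
                 n_layers=2, n_classes=97):
        super().__init__()
        # One linear projection per modality maps its feature dimension
        # into the shared model dimension.
        self.proj = nn.ModuleList(
            [nn.Linear(d, d_model) for d in modality_dims])
        # Learned modality embeddings tell the transformer which
        # modality each token came from.
        self.mod_emb = nn.Parameter(
            torch.zeros(len(modality_dims), d_model))
        layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=4 * d_model,
            batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, feats):
        # feats: list of (batch, time, dim_m) tensors, one per modality.
        tokens = [p(x) + e for p, x, e
                  in zip(self.proj, feats, self.mod_emb)]
        # Early fusion: all modalities share one token sequence, so
        # attention can mix them from the first layer onward.
        fused = self.encoder(torch.cat(tokens, dim=1))
        # Mean-pool the fused tokens and classify the next action.
        return self.head(fused.mean(dim=1))
```

Under this design, adding a new modality (such as the audio features extracted for EpicKitchens-100) only means appending another entry to `modality_dims` — no architectural change is needed, matching the extensibility claim above.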

Zeyun Zhong, David Schneider, Michael Voit, Rainer Stiefelhagen, Jürgen Beyerer · 2022

Related benchmarks

Task | Dataset | Metric | Result | Rank
Action Anticipation | DARai | Anticipation Accuracy | 25.79 | 64
Action Anticipation | DARai (Coarse) | MoC Accuracy | 33.82 | 64
Action Anticipation | EPIC-KITCHENS 100 (test) | Overall Action Top-5 Recall | 18.5 | 59
Action Anticipation | NTURGBD | MoC Accuracy | 21.27 | 56
Action Anticipation | UTKinects | MoC Accuracy | 25 | 56
Action Anticipation | DARai Fine-grained | MoC Accuracy | 0.1726 | 56
Action Anticipation | Epic-Kitchens-100 (val) | mCR@5 (Overall Verb) | 23.4 | 33
Action Anticipation | EGTEA Gaze+ | Top-1 Acc (Verb) | 53.4 | 21
Spatial-Temporal Anticipation | Ego4D STA v1, v2 (val) | Base Performance (B) | 40.5 | 14
Action Anticipation | EGTEA Gaze+ (Split 1) | Top-1 Acc (Verb) | 53.4 | 9

Other info

Code
