
Interaction Relational Network for Mutual Action Recognition

About

Person-person mutual action recognition (also referred to as interaction recognition) is an important branch of human activity analysis. Current solutions in the field, dominated mainly by CNNs, GCNs and LSTMs, often rely on complicated architectures and mechanisms that embed the relationships between the two persons into the architecture itself, so that the interaction patterns can be properly learned. Our main contribution is a simpler yet very powerful architecture, the Interaction Relational Network (IRN), which uses minimal prior knowledge about the structure of the human body and lets the network identify by itself how to relate the body parts of the interacting individuals. To better represent the interaction, we define two different relationships, leading to specialized architectures and models for each. These relationship models are then fused into a single architecture in order to leverage both streams of information and further enhance the relational reasoning capability. Furthermore, we define structured pairwise operations, distance and motion, to extract meaningful extra information from each pair of joints. Finally, by coupling the network with an LSTM, our IRN becomes capable of strong sequential relational reasoning. These extensions can also be valuable to other problems that require sophisticated relational reasoning. Our solution achieves state-of-the-art performance on the traditional interaction recognition datasets SBU and UT, as well as on the mutual actions of the large-scale NTU RGB+D dataset, and obtains competitive performance on the interactions subset of NTU RGB+D 120.
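The structured pairwise operations mentioned above (distance and motion between joint pairs) can be illustrated with a minimal sketch. This is not the authors' implementation; the function name, tensor shapes, and the exact form of the motion feature (frame-to-frame change of the relative displacement) are assumptions made for illustration:

```python
import numpy as np

def pairwise_features(p1, p2):
    """Illustrative pairwise operations for two skeleton sequences.

    p1, p2: joint coordinates of person 1 and person 2,
            each of shape (T, J, 3) = (frames, joints, xyz).
    Returns, for every joint pair (i from person 1, j from person 2):
      dist   -- per-frame Euclidean distance, shape (T, J, J)
      motion -- frame-to-frame change of the relative displacement,
                shape (T-1, J, J, 3)
    """
    # Relative displacement between every joint pair: (T, J, J, 3)
    rel = p1[:, :, None, :] - p2[:, None, :, :]
    # Pairwise distance per frame: (T, J, J)
    dist = np.linalg.norm(rel, axis=-1)
    # Motion feature: how the displacement changes between frames
    motion = rel[1:] - rel[:-1]
    return dist, motion

# Toy usage: 4 frames, 5 joints per person
p1 = np.random.rand(4, 5, 3)
p2 = np.random.rand(4, 5, 3)
dist, motion = pairwise_features(p1, p2)
```

In the paper's pipeline, features like these would be attached to each joint pair before the pairs are processed by the relation modules and, ultimately, the LSTM.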

Mauricio Perez, Jun Liu, Alex C. Kot • 2019

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Action Recognition | NTU RGB+D 120 (X-set) | Accuracy | 79.6 | 661 |
| Action Recognition | NTU RGB+D 120 Cross-Subject | Accuracy | 77.7 | 183 |
| Skeleton-based Action Recognition | NTU 120 (X-sub) | Accuracy | 77.7 | 139 |
| Skeleton-based Action Recognition | NTU-RGB+D 120 (Cross-setup) | Accuracy | 79.6 | 136 |
| Human Interaction Recognition | SBU Interaction Dataset (test) | Accuracy | 98.2 | 14 |
| Interaction Recognition | NTU RGB+D 120 (X-set) | Accuracy | 79.6 | 13 |
| Human Interaction Recognition | UT-Interaction (UT-1) | Accuracy | 98.3 | 12 |
| Interaction Recognition | NTU-RGB+D (X-Sub) | Accuracy | 90.5 | 10 |
| Interaction Recognition | NTU-RGB+D (X-View) | Accuracy | 93.5 | 10 |
| Interaction Recognition | NTU-RGB+D 120 (X-Sub) | Accuracy | 77.7 | 10 |

Showing 10 of 14 rows.
