Part-based Graph Convolutional Network for Action Recognition
About
Human actions comprise joint motions of articulated body parts, or `gestures'. The human skeleton is intuitively represented as a sparse graph, with joints as nodes and the natural connections between them as edges. Graph convolutional networks have been used to recognize actions from skeletal videos. We introduce a part-based graph convolutional network (PB-GCN) for this task, inspired by Deformable Part-based Models (DPMs). We divide the skeleton graph into four subgraphs with joints shared across them and learn a recognition model using a part-based graph convolutional network. We show that such a model improves recognition performance compared to a model using the entire skeleton graph. Instead of using 3D joint coordinates as node features, we show that using relative coordinates and temporal displacements boosts performance. Our model achieves state-of-the-art performance on two challenging benchmark datasets, NTU RGB+D and HDM05, for skeleton-based action recognition.
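The node features described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the four-part split and the 25-joint indices below assume an NTU RGB+D-style skeleton, and the choice of reference joint for relative coordinates is hypothetical.

```python
import numpy as np

# Hypothetical four-part division of a 25-joint skeleton (NTU RGB+D-style
# layout assumed); parts share joints, mirroring the paper's overlapping
# subgraphs. Indices here are illustrative only.
PARTS = {
    "left_arm":  [20, 4, 5, 6, 7, 21, 22],
    "right_arm": [20, 8, 9, 10, 11, 23, 24],
    "left_leg":  [0, 12, 13, 14, 15],
    "right_leg": [0, 16, 17, 18, 19],
}

def node_features(seq, ref_joint=1):
    """Build per-joint features from a skeleton sequence.

    seq: (T, J, 3) array of 3D joint coordinates over T frames.
    Returns (T, J, 6): relative coordinates w.r.t. a reference joint,
    concatenated with temporal displacements between consecutive frames.
    """
    # Relative coordinates: subtract the reference joint's position per frame.
    rel = seq - seq[:, ref_joint:ref_joint + 1, :]
    # Temporal displacements: frame-to-frame motion (zero for the first frame).
    disp = np.zeros_like(seq)
    disp[1:] = seq[1:] - seq[:-1]
    return np.concatenate([rel, disp], axis=-1)

T, J = 4, 25
seq = np.random.rand(T, J, 3)
feats = node_features(seq)
print(feats.shape)  # (4, 25, 6)
```

Each subgraph in `PARTS` would then be convolved separately and the part outputs aggregated, rather than running one convolution over the full skeleton graph.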
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Action Recognition | NTU RGB+D (Cross-View) | Accuracy: 93.2% | 609 |
| Action Recognition | NTU RGB+D 60 (Cross-View) | Accuracy: 93.2% | 575 |
| Action Recognition | NTU RGB+D (Cross-Subject) | Accuracy: 87.5% | 474 |
| Action Recognition | NTU RGB-D Cross-Subject 60 | Accuracy: 87.5% | 305 |
| Skeleton-based Action Recognition | NTU RGB+D (Cross-View) | Accuracy: 93.4% | 213 |
| Skeleton-based Action Recognition | NTU RGB+D (Cross-Subject) | Accuracy: 87.5% | 123 |
| Action Recognition | NTU RGB+D v1 (Cross-Subject (CS)) | Accuracy: 87.5% | 50 |
| Action Recognition | HDM05 (10-fold cross sample val) | Accuracy: 88.17% | 7 |