
Spatial Transformer Network on Skeleton-based Gait Recognition

About

Skeleton-based gait recognition models usually suffer from a robustness problem: Rank-1 accuracy can drop from about 90% under normal walking conditions to about 70% when subjects walk in coats. In this work, we propose Gait-TR, a robust state-of-the-art skeleton-based gait recognition model that combines a spatial transformer framework with temporal convolutional networks. Gait-TR achieves substantial improvements over other skeleton-based gait models, with higher accuracy and better robustness, on the well-known CASIA-B gait dataset. In particular, in the walking-with-coats condition, Gait-TR reaches 90% Rank-1 accuracy, exceeding the best result of silhouette-based models, which usually achieve higher accuracy than skeleton-based gait recognition models. Moreover, our experiments on CASIA-B show that the spatial transformer extracts gait features from the human skeleton better than the widely used graph convolutional network.

Cun Zhang, Xing-Peng Chen, Guo-Qiang Han, Xiang-Jie Liu • 2022
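The two building blocks named in the abstract can be illustrated with a minimal NumPy sketch: self-attention applied across the joints of each frame (the spatial-transformer part), followed by a 1D convolution along the time axis (the temporal-convolution part). This is a hypothetical toy, not Gait-TR's actual implementation; the shapes (17 COCO-style joints, 8 channels), the single-head attention, and the shared depthwise kernel are all assumptions for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def spatial_attention(x, wq, wk, wv):
    """Single-head self-attention across the J joints of each frame.
    x: (T, J, C) skeleton sequence; wq/wk/wv: (C, C) projections."""
    q, k, v = x @ wq, x @ wk, x @ wv                     # each (T, J, C)
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(x.shape[-1])  # (T, J, J)
    return softmax(scores, axis=-1) @ v                  # (T, J, C)

def temporal_conv(x, kernel):
    """Depthwise 1D convolution along time with 'same' padding.
    x: (T, J, C); kernel: (K,), shared across joints and channels."""
    k = len(kernel)
    pad = k // 2
    xp = np.pad(x, ((pad, pad), (0, 0), (0, 0)))
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        # weighted sum over the K-frame temporal window ending here
        out[t] = np.tensordot(kernel, xp[t:t + k], axes=(0, 0))
    return out

rng = np.random.default_rng(0)
T, J, C = 30, 17, 8                     # frames, joints, feature channels
x = rng.standard_normal((T, J, C))      # stand-in for a pose sequence
wq, wk, wv = (rng.standard_normal((C, C)) * 0.1 for _ in range(3))
feat = temporal_conv(spatial_attention(x, wq, wk, wv),
                     np.array([0.25, 0.5, 0.25]))
print(feat.shape)  # (30, 17, 8)
```

The design point the abstract makes is that attention lets every joint attend to every other joint with learned weights, whereas a graph convolutional network is restricted to a fixed skeleton adjacency; the sketch above shows the unconstrained (T, J, J) attention map that enables this.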

Related benchmarks

| Task             | Dataset                    | Metric                      | Result | Rank |
|------------------|----------------------------|-----------------------------|--------|------|
| Gait Recognition | CASIA-B (test)             | Rank-1 Accuracy (NM)        | 94.7   | 54   |
| Gait Recognition | CCPG                       | CL                          | 15.7   | 32   |
| Gait Recognition | SUSTech1K (test)           | Rank-1 Accuracy (Clothing)  | 21     | 32   |
| Gait Recognition | SUSTech1K                  | Rank-5 Accuracy             | 56     | 30   |
| Gait Recognition | CCPG (test)                | Rank-1 Accuracy (CL)        | 24.3   | 23   |
| Gait Recognition | SUSTech1K (Probe Sequence) | Rank-1 Accuracy (Normal)    | 33.3   | 16   |
| Gait Recognition | BarbieGait THK1            | Rank-1 Accuracy             | 63.3   | 15   |
| Gait Recognition | BarbieGait THK2            | Rank-1 Accuracy             | 59.2   | 15   |
| Gait Recognition | BarbieGait THK3            | Rank-1 Accuracy             | 58.7   | 15   |
| Gait Recognition | BarbieGait THK4            | Rank-1 Accuracy             | 58.1   | 15   |
Showing 10 of 17 rows
