
GaitGL: Learning Discriminative Global-Local Feature Representations for Gait Recognition

About

Existing gait recognition methods either directly establish a Global Feature Representation (GFR) from original gait sequences or generate a Local Feature Representation (LFR) from several local parts. However, GFR tends to neglect local details of human postures as the receptive fields grow larger in deeper network layers. Although LFR allows the network to focus on the detailed posture information of each local region, it neglects the relations among different local parts and thus exploits only limited local information from a few specific regions. To solve these issues, we propose a global-local based gait recognition network, named GaitGL, to generate more discriminative feature representations. Specifically, a novel Global and Local Convolutional Layer (GLCL) is developed to take full advantage of both global visual information and local region details in each layer. GLCL is a dual-branch structure consisting of a GFR extractor and a mask-based LFR extractor. The GFR extractor aims to capture contextual information, e.g., the relationships among various body parts, while the mask-based LFR extractor exploits the detailed posture changes of local regions. In addition, we introduce a novel mask-based strategy to improve the local feature extraction capability. Specifically, we design pairs of complementary masks to randomly occlude feature maps, and then train our mask-based LFR extractor on the various occluded feature maps. In this manner, the LFR extractor learns to fully exploit local information. Extensive experiments demonstrate that GaitGL achieves better performance than state-of-the-art gait recognition methods. The average rank-1 accuracy on CASIA-B, OU-MVLP, GREW and Gait3D is 93.6%, 98.7%, 68.0% and 63.8%, respectively, significantly outperforming the competing methods. The proposed method won first prize in two competitions: HID 2020 and HID 2021.
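The complementary-mask idea described above can be sketched in a few lines of plain Python. This is a minimal illustration, not the paper's implementation: it assumes the masks occlude a horizontal strip of rows in a 2-D feature map, with the second mask covering exactly the rows the first one leaves visible, so the pair jointly covers the whole map. The helper names (`complementary_masks`, `apply_mask`) and the strip-based masking scheme are hypothetical.

```python
import random

def complementary_masks(height, strip_start, strip_len):
    """Build a pair of binary row masks over a feature map.

    Mask A zeroes a horizontal strip of rows; mask B zeroes the
    complement, so A + B covers every row exactly once.
    (Hypothetical scheme -- the paper's exact masking may differ.)
    """
    mask_a = [0 if strip_start <= r < strip_start + strip_len else 1
              for r in range(height)]
    mask_b = [1 - m for m in mask_a]
    return mask_a, mask_b

def apply_mask(feature_map, row_mask):
    """Multiply each row of a 2-D feature map by its mask value."""
    return [[v * row_mask[r] for v in row]
            for r, row in enumerate(feature_map)]

# Toy 4x3 "feature map"; the LFR extractor would be trained on
# both occluded views so it learns to use every local region.
fmap = [[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]]
start = random.randrange(0, 3)          # random strip position
a, b = complementary_masks(len(fmap), start, 2)
occ_a = apply_mask(fmap, a)             # strip occluded
occ_b = apply_mask(fmap, b)             # everything else occluded
```

Because the two masks are complementary, adding the two occluded maps elementwise reconstructs the original feature map, which is what forces the local extractor to attend to all regions rather than a fixed few.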

Beibei Lin, Shunli Zhang, Ming Wang, Lincheng Li, Xin Yu• 2022

Related benchmarks

Task              Dataset               Metric           Result  Rank
Gait Recognition  SUSTech1K 1.0 (test)  Accuracy (NM)    67.1    19
Gait Recognition  CCPG (UP)             Rank-1 Accuracy  76.2    18
Gait Recognition  CCPG                  Rank-1 Accuracy  68.3    18
Gait Recognition  CCPG BG               Rank-1 Accuracy  76.7    18
Gait Recognition  CCPG Mean             Rank-1 Accuracy  72.1    18
Gait Recognition  CCPG DN               Rank-1 Accuracy  67.0    18
Gait Recognition  BarbieGait THK1       Rank-1 Accuracy  44.7    15
Gait Recognition  BarbieGait THK2       Rank-1 Accuracy  37.5    15
Gait Recognition  BarbieGait THK4       Rank-1 Accuracy  31.2    15
Gait Recognition  BarbieGait THK9       Rank-1 Accuracy  13.8    15

Showing 10 of 16 rows
