
Boosting Crowd Counting via Multifaceted Attention

About

This paper focuses on the challenging crowd counting task. Because large-scale variations often exist within crowd images, neither the fixed-size convolution kernels of CNNs nor the fixed-size attention of recent vision transformers can handle this kind of variation well. To address this problem, we propose a Multifaceted Attention Network (MAN) to improve transformer models in local spatial relation encoding. MAN incorporates global attention from a vanilla transformer, learnable local attention, and instance attention into a counting model. First, the Learnable Region Attention (LRA) is proposed to dynamically assign an exclusive attention region to each feature location. Second, we design a Local Attention Regularization to supervise the training of LRA by minimizing the deviation among the attention regions of different feature locations. Finally, we provide an Instance Attention mechanism to dynamically focus on the most important instances during training. Extensive experiments on four challenging crowd counting datasets, namely ShanghaiTech, UCF-QNRF, JHU-CROWD++, and NWPU, validate the proposed method. Code: https://github.com/LoraLinH/Boosting-Crowd-Counting-via-Multifaceted-Attention.
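To illustrate the idea of combining global attention with a learnable local region, here is a minimal NumPy sketch. It is an assumption-laden simplification, not the paper's implementation: in MAN the region logits would be predicted by a learned module per feature location, whereas here `region_logits` is simply passed in, and the multi-head structure of a real transformer is omitted.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def global_attention(Q, K, V):
    # vanilla scaled dot-product attention over all N feature locations
    d = Q.shape[-1]
    return softmax(Q @ K.T / np.sqrt(d)) @ V

def local_region_attention(Q, K, V, region_logits):
    # region_logits: (N, N) scores giving, for each query location, a soft
    # membership of every key location in that query's attention region
    # (in the paper these would come from a learnable module)
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    gate = 1.0 / (1.0 + np.exp(-region_logits))  # soft region mask in (0, 1)
    masked = scores + np.log(gate + 1e-9)        # down-weight out-of-region keys
    return softmax(masked) @ V
```

With all-zero region logits the gate is a constant 0.5 for every key, so the local branch reduces to plain global attention; the interesting behavior appears once the logits differ across key locations and carve out a region per query.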

Hui Lin, Zhiheng Ma, Rongrong Ji, Yaowei Wang, Xiaopeng Hong • 2022

Related benchmarks

Task           | Dataset                                   | Result    | Rank
Crowd Counting | ShanghaiTech Part A (test)                | MAE 56.8  | 227
Crowd Counting | ShanghaiTech Part B (test)                | MAE 12.5  | 191
Crowd Counting | UCF-QNRF (test)                           | MAE 77.3  | 95
Crowd Counting | JHU-CROWD++ (test)                        | MAE 53.4  | 39
Crowd Counting | UCF-QNRF (Q) (test)                       | MAE 138.8 | 31
Crowd Counting | NWPU 49                                   | MAE 76.5  | 13
Crowd Counting | JHU-Crowd++ Street -> Stadium             | MAE 246.1 | 8
Crowd Counting | JHU-Crowd++ Snow -> Fog/Haze              | MAE 38.1  | 8
Crowd Counting | JHU-Crowd++ Stadium -> Street (SD -> SR)  | MAE 45.1  | 8
Crowd Counting | JHU-Crowd++ Fog/Haze -> Snow              | MAE 445   | 8
