
PoseRAC: Pose Saliency Transformer for Repetitive Action Counting

About

This paper contributes to the field of repetitive action counting by introducing a new approach called Pose Saliency Representation. The proposed method represents each action with only two salient poses instead of redundant frames, which significantly reduces computational cost while improving performance. Building on this representation, we introduce a pose-level method, PoseRAC, which uses Pose Saliency Annotation to annotate salient poses for training and achieves state-of-the-art performance on pose-level versions of two datasets. Our lightweight model is highly efficient, requiring only 20 minutes of training on a GPU, and runs inference nearly 10x faster than previous methods. It also substantially outperforms the previous state-of-the-art TransRAC, achieving an OBO metric of 0.56 compared to TransRAC's 0.29. The code and new datasets are available at https://github.com/MiracleDance/PoseRAC for further research and experimentation, making our approach highly accessible to the research community.
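The core counting idea described above can be sketched as follows: once each frame is classified into salient-pose labels, one repetition is counted each time the action transitions from its first salient pose to its second. This is a minimal illustrative sketch, not the authors' implementation; the function name and pose labels are hypothetical, and the real PoseRAC model classifies per-frame body keypoints with a transformer.

```python
# Hypothetical sketch of the Pose Saliency idea: count one repetition each
# time the label sequence alternates from the first salient pose to the
# second. Labels here are placeholders for per-frame classifier outputs.

def count_repetitions(frame_labels, start_pose="salient1", end_pose="salient2"):
    """Count start->end salient-pose transitions in a label sequence."""
    count = 0
    seen_start = False
    for label in frame_labels:
        if label == start_pose:
            seen_start = True
        elif label == end_pose and seen_start:
            count += 1
            seen_start = False  # require a fresh start pose before the next rep
    return count

labels = ["salient1", "mid", "salient2", "salient1", "salient2", "other"]
print(count_repetitions(labels))  # 2
```

Counting transitions between only two salient poses per action class is what lets the method ignore the redundant intermediate frames that frame-level approaches must process.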

Ziyu Yao, Xuxin Cheng, Yuexian Zou• 2023

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Repetitive Action Counting | RepCount-pose (test) | MAE | 0.236 | 8 |
| Repetitive Action Counting | UCFRep-pose (test) | MAE | 31.2 | 8 |
| Repetition Counting | Mo-RepCount | OBO | 0.382 | 5 |
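The MAE and OBO metrics reported above can be computed as commonly defined in repetition-counting benchmarks such as TransRAC: MAE is the count error normalized by the ground-truth count, and OBO (Off-By-One accuracy) is the fraction of videos whose predicted count is within one of the ground truth. A minimal sketch, assuming these standard definitions:

```python
# Standard repetition-counting metrics (as used by TransRAC-style benchmarks):
# MAE  = mean over videos of |pred - gt| / gt  (normalized count error)
# OBO  = fraction of videos with |pred - gt| <= 1

def mae(preds, gts):
    return sum(abs(p - g) / g for p, g in zip(preds, gts)) / len(preds)

def obo(preds, gts):
    return sum(abs(p - g) <= 1 for p, g in zip(preds, gts)) / len(preds)

preds = [10, 4, 7]  # illustrative predicted counts
gts = [10, 5, 9]    # illustrative ground-truth counts
print(round(mae(preds, gts), 3))  # 0.141
print(round(obo(preds, gts), 3))  # 0.667
```

Lower MAE is better, while higher OBO is better, which is why the abstract highlights PoseRAC's OBO of 0.56 against TransRAC's 0.29.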

Other info

Code: https://github.com/MiracleDance/PoseRAC
