
Discovering Spatio-Temporal Action Tubes

About

In this paper, we address the challenging problem of spatial and temporal action detection in videos. We first develop an effective approach to localize frame-level action regions by integrating static and kinematic information via an early- and late-fusion detection scheme. To exploit the temporal connections among the detected action regions, we propose a tracking-by-point-matching algorithm that stitches the discrete action regions into continuous spatio-temporal action tubes. A recurrent 3D convolutional neural network (R3DCNN) is then used to predict action categories and determine the temporal boundaries of the generated tubes. We further introduce an action footprint map to refine the candidate tubes based on the action-specific spatial characteristics preserved in the convolutional layers of the R3DCNN. In extensive experiments, our method achieves superior detection results on three public benchmark datasets: UCFSports, J-HMDB, and UCF101.
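The core of the pipeline is linking per-frame detections into spatio-temporal tubes. The paper's tracking-by-point-matching algorithm is more involved; the sketch below uses plain greedy IoU linking between consecutive frames as a simplified stand-in to illustrate the stitching step. The box format `(x1, y1, x2, y2)` and the overlap threshold are assumptions, not details from the paper.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    if inter == 0.0:
        return 0.0
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def link_tubes(frames, thresh=0.3):
    """Greedily stitch per-frame boxes into tubes.

    frames: list over time; each element is a list of detected boxes.
    Returns a list of tubes, each a list of (frame_index, box) pairs.
    """
    tubes = []
    for t, boxes in enumerate(frames):
        used = set()
        # Try to extend tubes that ended at the previous frame with the
        # best-overlapping unmatched box in the current frame.
        for tube in tubes:
            last_t, last_box = tube[-1]
            if last_t != t - 1:
                continue
            best, best_iou = None, thresh
            for i, box in enumerate(boxes):
                if i in used:
                    continue
                o = iou(last_box, box)
                if o > best_iou:
                    best, best_iou = i, o
            if best is not None:
                tube.append((t, boxes[best]))
                used.add(best)
        # Boxes not matched to any existing tube start new tubes.
        for i, box in enumerate(boxes):
            if i not in used:
                tubes.append([(t, box)])
    return tubes
```

A box that drifts slightly between frames is absorbed into the same tube, while a detection with no sufficiently overlapping predecessor starts a new tube; the paper's point-matching approach replaces the IoU association step with a more robust matching criterion.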

Yuancheng Ye, Xiaodong Yang, Yingli Tian • 2018

Related benchmarks

Task                          Dataset  Metric               Result  Rank
Video-level Action Detection  UCF101   mAP (IoU=0.2)        76.2    8
Frame-level Action Detection  UCF101   frame-mAP (IoU=0.5)  67.0    5
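The video-level mAP above scores a predicted tube against a ground-truth tube by spatio-temporal IoU, commonly computed as the temporal IoU of the two frame spans multiplied by the mean spatial IoU over the frames where they overlap. The sketch below assumes tubes are represented as dictionaries mapping frame index to a box; this representation is an illustration, not the paper's evaluation code.

```python
def box_iou(a, b):
    """Spatial IoU of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    if inter == 0.0:
        return 0.0
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def tube_iou(tube_a, tube_b):
    """Spatio-temporal IoU of two tubes (dict: frame index -> box).

    Temporal IoU of the frame sets, scaled by the mean spatial IoU
    over the frames both tubes cover.
    """
    frames_a, frames_b = set(tube_a), set(tube_b)
    inter = frames_a & frames_b
    if not inter:
        return 0.0
    temporal_iou = len(inter) / len(frames_a | frames_b)
    spatial_iou = sum(box_iou(tube_a[t], tube_b[t]) for t in inter) / len(inter)
    return temporal_iou * spatial_iou
```

A detection counts as correct at, say, IoU=0.2 when `tube_iou` with some ground-truth tube of the same class exceeds 0.2, and mAP is then computed over the ranked detections as in standard object-detection evaluation.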
