Finding Action Tubes with a Sparse-to-Dense Framework
About
The task of spatio-temporal action detection has attracted increasing attention among researchers. Existing dominant methods solve this problem by relying on short-term information and dense, serial detection on each individual frame or clip. Despite their effectiveness, these methods make inadequate use of long-term information and are prone to inefficiency. In this paper, we propose, for the first time, an efficient framework that generates action tube proposals from video streams with a single forward pass in a sparse-to-dense manner. There are two key characteristics of this framework: (1) both long-term and short-term sampled information are explicitly utilized in our spatio-temporal network, and (2) a new dynamic feature sampling module (DTS) is designed to effectively approximate the tube output while keeping the system tractable. We evaluate the efficacy of our model on the UCF101-24, JHMDB-21 and UCFSports benchmark datasets, achieving promising results that are competitive with state-of-the-art methods. The proposed sparse-to-dense strategy makes our framework about 7.6 times more efficient than the nearest competitor.
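To make the sparse-to-dense idea concrete, below is a minimal sketch of how long-term (sparse) and short-term (dense) frame sampling from a single video stream might look. The function name, parameter names, and default values are illustrative assumptions; the paper's actual DTS module and sampling scheme are not specified here.

```python
import numpy as np

def sample_frame_indices(num_frames, num_sparse=8, dense_window=16, keyframe=None):
    """Illustrative sparse-to-dense frame sampling (not the paper's exact scheme).

    Sparse indices span the whole clip to capture long-term context, while
    dense indices cover a short contiguous window around a keyframe to
    capture short-term detail.
    """
    # Long-term: evenly spaced ("sparse") indices across the full video.
    sparse = np.linspace(0, num_frames - 1, num_sparse).astype(int)

    # Short-term: contiguous ("dense") indices around a keyframe.
    if keyframe is None:
        keyframe = num_frames // 2
    start = max(0, min(keyframe - dense_window // 2, num_frames - dense_window))
    dense = np.arange(start, start + dense_window)

    return sparse, dense

# Example: a 120-frame clip processed in one pass with both samplings.
sparse_idx, dense_idx = sample_frame_indices(num_frames=120)
print("long-term (sparse):", sparse_idx)
print("short-term (dense):", dense_idx)
```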
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Action Detection | JHMDB-21 | video-mAP@0.5 | 74.3 | 21 |
| Action Detection | UCF101-24 | video-mAP@0.5 | 54 | 13 |
| Spatio-temporal action detection | UCFSports | mAP@0.50 | 93.8 | 13 |
| Action Detection | J-HMDB | V-Score (IoU 0.5) | 74.3 | 10 |