Hidden Two-Stream Convolutional Networks for Action Recognition
About
Analyzing videos of human actions involves understanding the temporal relationships among video frames. State-of-the-art action recognition approaches rely on traditional optical flow estimation methods to pre-compute motion information for CNNs. Such a two-stage approach is computationally expensive, storage demanding, and not end-to-end trainable. In this paper, we present a novel CNN architecture that implicitly captures motion information between adjacent frames. We name our approach hidden two-stream CNNs because it takes only raw video frames as input and directly predicts action classes without explicitly computing optical flow. Our end-to-end approach is 10x faster than its two-stage baseline. Experimental results on four challenging action recognition datasets (UCF101, HMDB51, THUMOS14, and ActivityNet v1.2) show that our approach significantly outperforms the previous best real-time approaches.
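The core idea above (a learned motion module inside the network replaces pre-computed optical flow, so the whole pipeline runs end-to-end on raw frames) can be sketched as follows. This is a minimal illustrative sketch, not the paper's architecture: simple frame differencing stands in for the learned "MotionNet"-style module, and a random linear layer stands in for the classification head; all function names and shapes are assumptions.

```python
import numpy as np

def motion_features(frames):
    """Stand-in for an internal motion module: temporal differences
    between adjacent frames serve as crude flow-like motion cues,
    computed inside the pipeline rather than pre-computed and stored."""
    # frames: (T, H, W) grayscale clip -> (T-1, H, W) motion maps
    return frames[1:] - frames[:-1]

def classify(features, n_classes=101, seed=0):
    """Stand-in classifier head: global spatial pooling followed by a
    random linear layer (a real model would learn these weights)."""
    rng = np.random.default_rng(seed)
    pooled = features.mean(axis=(1, 2))          # (T-1,) motion energy per step
    W = rng.standard_normal((n_classes, pooled.shape[0]))
    logits = W @ pooled                          # (n_classes,)
    return int(np.argmax(logits))

# End-to-end: raw frames in, class index out -- no external optical flow.
clip = np.random.default_rng(1).standard_normal((11, 8, 8))
feats = motion_features(clip)                    # shape (10, 8, 8)
pred = classify(feats)                           # integer in [0, 101)
```

In the actual paper the motion module is a trainable CNN supervised to produce optical-flow-like outputs, so the two stages collapse into one network; the sketch only shows the data flow, not the learning.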
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Action Recognition | UCF101 | Accuracy | 97.1% | 365 |
| Action Recognition | HMDB51 | 3-Fold Accuracy | 78.7% | 191 |
| Action Recognition | UCF101 (Split 1) | -- | -- | 105 |
| Action Recognition | HMDB51 (Split 1) | Top-1 Accuracy | 78.7% | 75 |
| Action Recognition | ActivityNet | Accuracy | 91.2% | 22 |
| Action Recognition | THUMOS14 | Mean Accuracy | 80.6% | 8 |