Two-Stream Temporal Transformer for Video Action Classification
About
Motion representation plays an important role in video understanding and has many applications, including action recognition, robot navigation, and autonomous guidance. Recently, transformer networks, through their self-attention mechanism, have proven effective in many applications. In this study, we introduce a new two-stream transformer video classifier that extracts spatio-temporal information from two streams: frame content and optical flow, which encodes motion. The proposed model computes self-attention features across the joint optical-flow and temporal-frame domain and models their relationships within the transformer encoder. Experimental results show that the proposed method achieves excellent classification results on three well-known video datasets of human activities.
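To make the architecture concrete, below is a minimal PyTorch sketch of a two-stream transformer classifier. It assumes pre-extracted per-frame RGB and optical-flow features; all module names, dimensions, and hyper-parameters are illustrative assumptions, not the authors' actual implementation. The key idea it demonstrates is concatenating the two token streams so self-attention operates over the joint frame/flow domain.

```python
# Minimal sketch of a two-stream temporal transformer (assumed design,
# not the authors' code). Requires PyTorch >= 1.9 for batch_first.
import torch
import torch.nn as nn


class TwoStreamTemporalTransformer(nn.Module):
    """Classifies a video from per-frame RGB and optical-flow features.

    Each stream is linearly projected into a shared embedding space; the
    token sequences are then concatenated so self-attention can relate
    appearance and motion across the joint frame/flow domain.
    """

    def __init__(self, rgb_dim=2048, flow_dim=2048, embed_dim=512,
                 num_frames=32, num_layers=6, num_heads=8, num_classes=101):
        super().__init__()
        self.rgb_proj = nn.Linear(rgb_dim, embed_dim)    # appearance stream
        self.flow_proj = nn.Linear(flow_dim, embed_dim)  # motion stream
        # Learned positional embeddings over the concatenated sequence:
        # one [CLS] token + num_frames RGB tokens + num_frames flow tokens.
        self.pos_embed = nn.Parameter(torch.zeros(1, 2 * num_frames + 1, embed_dim))
        self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=num_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=num_layers)
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, rgb_feats, flow_feats):
        # rgb_feats, flow_feats: (batch, num_frames, feature_dim)
        tokens = torch.cat([self.rgb_proj(rgb_feats),
                            self.flow_proj(flow_feats)], dim=1)
        cls = self.cls_token.expand(tokens.size(0), -1, -1)
        tokens = torch.cat([cls, tokens], dim=1) + self.pos_embed
        encoded = self.encoder(tokens)
        return self.head(encoded[:, 0])  # classify from the [CLS] token


# Usage: a batch of 2 clips, 32 frames, 2048-d features per stream.
model = TwoStreamTemporalTransformer()
rgb = torch.randn(2, 32, 2048)
flow = torch.randn(2, 32, 2048)
logits = model(rgb, flow)  # (2, 101) class scores, e.g. for UCF101
```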
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Video Classification | Something-Something v2 (test) | Top-1 Accuracy | 0.5638 | 169 |
| Video Action Recognition | HMDB51 (average over all splits) | Top-1 Accuracy | 83.39 | 56 |
| Video Classification | UCF101 (averaged over three splits) | Accuracy | 93.54 | 39 |