End-to-End Spatio-Temporal Action Localisation with Video Transformers
About
The most performant spatio-temporal action localisation models rely on external person proposals and complex external memory banks. We propose a fully end-to-end, purely transformer-based model that directly ingests an input video and outputs tubelets: a sequence of bounding boxes and the action classes at each frame. Our flexible model can be trained with either sparse bounding-box supervision on individual frames or full tubelet annotations, and in both cases it predicts coherent tubelets as the output. Moreover, our end-to-end model requires no additional pre-processing in the form of proposals, nor post-processing in the form of non-maximum suppression. We perform extensive ablation experiments and significantly advance the state of the art on four different spatio-temporal action localisation benchmarks with both sparse keyframe and full tubelet annotations.
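To make the output format concrete, a tubelet pairs one bounding box with one set of action labels per frame. The sketch below is illustrative only; the class and field names are assumptions, not the paper's actual API:

```python
from dataclasses import dataclass
from typing import List, Tuple

# Hypothetical container for a single predicted tubelet:
# one (x1, y1, x2, y2) box and one action-class index per frame,
# in temporal order. Not taken from the paper's codebase.
@dataclass
class Tubelet:
    boxes: List[Tuple[float, float, float, float]]  # per-frame bounding boxes
    action_classes: List[int]                       # per-frame class indices

    def __post_init__(self) -> None:
        # A coherent tubelet has exactly one box and one label per frame.
        assert len(self.boxes) == len(self.action_classes)

    def __len__(self) -> int:
        # Temporal length of the tubelet, in frames.
        return len(self.boxes)
```

Because the model emits such tubelets directly, no proposal generation beforehand and no non-maximum suppression afterwards are needed.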
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Spatio-temporal Action Localisation | AVA v2.2 | mAP | 41.7 | 21 |
| Spatio-temporal Action Localisation | UCF101-24 | Video-mAP (IoU=0.2) | 88 | 20 |
| Spatio-temporal Action Localisation | J-HMDB-21 | Video-mAP (IoU=0.2) | 93.1 | 15 |
| Action Detection | UCF-101-24 (test) | F1 Score (IoU=0.5) | 90.3 | 15 |
| Spatio-temporal Action Localisation | AVA-Kinetics v1.0 | mAP | 44.6 | 10 |
| Action Detection | JHMDB (closed-set) | F@0.5 | 92.1 | 7 |
| Action Detection | MultiSports (closed-set) | F1 Score (IoU=0.5) | 59.3 | 3 |