
End-to-End Spatio-Temporal Action Localisation with Video Transformers

About

The most performant spatio-temporal action localisation models rely on external person proposals and complex external memory banks. We propose a fully end-to-end, purely transformer-based model that directly ingests an input video and outputs tubelets: sequences of bounding boxes with action classes at each frame. Our flexible model can be trained with either sparse bounding-box supervision on individual frames or full tubelet annotations, and in both cases it predicts coherent tubelets as output. Moreover, our end-to-end model requires neither additional pre-processing in the form of proposals nor post-processing in the form of non-maximal suppression. We perform extensive ablation experiments and significantly advance the state-of-the-art results on four different spatio-temporal action localisation benchmarks with both sparse keyframe and full tubelet annotations.
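As a rough illustration of the tubelet output described above, the sketch below models a tubelet as a per-frame sequence of bounding boxes paired with one set of action-class scores. The class and field names (`Tubelet`, `box_at`, etc.) are hypothetical assumptions for illustration, not the authors' code or API.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Tubelet:
    """Hypothetical container for one predicted tubelet:
    a sequence of boxes over frames plus action-class scores."""
    frame_indices: List[int]    # frames the tubelet spans
    boxes: List[List[float]]    # one [x1, y1, x2, y2] box per frame
    action_scores: List[float]  # one score per action class

    def box_at(self, frame: int) -> List[float]:
        """Return the bounding box predicted for a given frame index."""
        return self.boxes[self.frame_indices.index(frame)]

# Example: a 3-frame tubelet for a single detected person.
t = Tubelet(
    frame_indices=[0, 1, 2],
    boxes=[[10, 20, 50, 80], [12, 21, 52, 81], [14, 22, 54, 82]],
    action_scores=[0.10, 0.85, 0.05],  # e.g. scores for [stand, run, jump]
)
print(t.box_at(1))  # → [12, 21, 52, 81]
```

Because the model emits such tubelets directly, no non-maximal suppression or external proposal step is needed to link boxes across frames.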

Alexey Gritsenko, Xuehan Xiong, Josip Djolonga, Mostafa Dehghani, Chen Sun, Mario Lučić, Cordelia Schmid, Anurag Arnab • 2023

Related benchmarks

| Task                                 | Dataset                | Metric                 | Result | Rank |
|--------------------------------------|------------------------|------------------------|--------|------|
| Spatiotemporal Action Localization   | AVA 2.2                | mAP                    | 41.7   | 21   |
| Spatio-temporal Action Localization  | UCF101-24              | Video-mAP (IoU=0.2)    | 88     | 20   |
| Spatio-temporal Action Localization  | J-HMDB-21              | Video-mAP (IoU=0.2)    | 93.1   | 15   |
| Action Detection                     | UCF-101-24 (test)      | F1 Score (IoU=0.5)     | 90.3   | 15   |
| Spatio-temporal Action Localization  | AVA-Kinetics v1.0      | mAP                    | 44.6   | 10   |
| Action Detection                     | JHMDB closed-set       | F1 Score (IoU=0.5)     | 92.1   | 7    |
| Action Detection                     | MultiSports closed-set | F1 Score (IoU=0.5)     | 59.3   | 3    |
