
Is Space-Time Attention All You Need for Video Understanding?

About

We present a convolution-free approach to video classification built exclusively on self-attention over space and time. Our method, named "TimeSformer," adapts the standard Transformer architecture to video by enabling spatiotemporal feature learning directly from a sequence of frame-level patches. Our experimental study compares different self-attention schemes and suggests that "divided attention," where temporal attention and spatial attention are separately applied within each block, leads to the best video classification accuracy among the design choices considered. Despite the radically new design, TimeSformer achieves state-of-the-art results on several action recognition benchmarks, including the best reported accuracy on Kinetics-400 and Kinetics-600. Finally, compared to 3D convolutional networks, our model is faster to train, achieves dramatically higher test efficiency (at a small drop in accuracy), and can be applied to much longer video clips (over one minute long). Code and models are available at: https://github.com/facebookresearch/TimeSformer.
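The core of the "divided attention" scheme described above is that, within each block, every patch token first attends only to tokens at the same spatial location in other frames (temporal attention), and then only to tokens in its own frame (spatial attention). The sketch below illustrates that factorization in plain NumPy; it is a simplified illustration, not the paper's implementation — it omits learned query/key/value projections, multi-head splitting, the classification token, layer normalization, and the MLP sub-block, and all tensor names are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x):
    # Single-head scaled dot-product self-attention over the
    # second-to-last axis. x: (..., n_tokens, d)
    d = x.shape[-1]
    scores = x @ np.swapaxes(x, -1, -2) / np.sqrt(d)  # (..., n, n)
    return softmax(scores) @ x

def divided_space_time_attention(x):
    # x: (T frames, N patches per frame, D channels)
    T, N, D = x.shape
    # 1) Temporal attention: each spatial position attends across frames.
    xt = x.transpose(1, 0, 2)         # (N, T, D)
    xt = xt + self_attention(xt)      # residual connection, attend over T
    x = xt.transpose(1, 0, 2)         # back to (T, N, D)
    # 2) Spatial attention: each frame's patches attend to one another.
    x = x + self_attention(x)         # residual connection, attend over N
    return x

rng = np.random.default_rng(0)
tokens = rng.standard_normal((8, 196, 64))  # 8 frames, 14x14 patches, 64-dim
out = divided_space_time_attention(tokens)
print(out.shape)  # (8, 196, 64)
```

Note the efficiency argument this factorization makes: joint space-time attention compares all T·N tokens to each other (cost ∝ (T·N)²), while divided attention costs only ∝ T·N·(T+N) per block, which is what lets the model scale to longer clips.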

Gedas Bertasius, Heng Wang, Lorenzo Torresani • 2021

Related benchmarks

Task                  Dataset                        Metric          Result  Rank
Action Recognition    NTU RGB+D 120 (X-set)          Accuracy        91.6    661
Action Recognition    NTU RGB+D 60 (Cross-View)      Accuracy        97.2    575
Action Recognition    Something-Something v2 (val)   Top-1 Accuracy  62.5    535
Action Recognition    Kinetics-400                   Top-1 Accuracy  80.7    413
Action Recognition    UCF101 (mean of 3 splits)      Accuracy        92      357
Action Recognition    Something-Something v2         Top-1 Accuracy  62.5    341
Action Recognition    Something-Something v2 (test)  Top-1 Accuracy  62.5    333
Action Recognition    NTU RGB-D Cross-Subject 60     Accuracy        93      305
Action Recognition    Kinetics 400 (test)            Top-1 Accuracy  80.7    245
Video Classification  Kinetics 400 (val)             Top-1 Accuracy  80.7    204
Showing 10 of 132 rows.

Other info

Code

https://github.com/facebookresearch/TimeSformer