
Object-Region Video Transformers

About

Recently, video transformers have shown great success in video understanding, exceeding CNN performance; yet existing video transformer models do not explicitly model objects, although objects can be essential for recognizing actions. In this work, we present Object-Region Video Transformers (ORViT), an object-centric approach that extends video transformer layers with a block that directly incorporates object representations. The key idea is to fuse object-centric representations starting from early layers and propagate them into the transformer layers, thus affecting the spatio-temporal representations throughout the network. Our ORViT block consists of two object-level streams: appearance and dynamics. In the appearance stream, an "Object-Region Attention" module applies self-attention over the patches and object regions. In this way, visual object regions interact with uniform patch tokens and enrich them with contextualized object information. We further model object dynamics via a separate "Object-Dynamics Module", which captures trajectory interactions, and show how to integrate the two streams. We evaluate our model on four tasks and five datasets: compositional and few-shot action recognition on SomethingElse, spatio-temporal action detection on AVA, and standard action recognition on Something-Something V2, Diving48, and EPIC-KITCHENS 100. We show strong performance improvement across all tasks and datasets considered, demonstrating the value of a model that incorporates object representations into a transformer architecture. For code and pretrained models, visit the project page at https://roeiherz.github.io/ORViT/
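The abstract's two-stream design can be illustrated with a minimal sketch: an appearance stream that runs joint self-attention over patch tokens and object-region tokens, and a dynamics stream that encodes box trajectories and fuses them back in. This is a simplified illustration with hypothetical shapes, identity Q/K/V projections, and a stand-in trajectory projection, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(tokens, d):
    # Single-head scaled dot-product attention; identity Q/K/V
    # projections keep the sketch minimal.
    scores = tokens @ tokens.T / np.sqrt(d)
    return softmax(scores) @ tokens

def orvit_block(patch_tokens, object_tokens, object_coords):
    """Toy sketch of the ORViT block's two object-level streams.

    patch_tokens:  (P, d) uniform patch tokens
    object_tokens: (O, d) pooled object-region tokens (hypothetical input)
    object_coords: (O, T, 2) per-frame box centers, i.e. trajectories
    """
    P, d = patch_tokens.shape
    # Appearance stream ("Object-Region Attention"): attend jointly over
    # patches AND object regions, so patch tokens are enriched with
    # contextualized object information.
    joint = np.concatenate([patch_tokens, object_tokens], axis=0)
    attended = self_attention(joint, d)[:P]  # keep only the patch tokens
    # Dynamics stream ("Object-Dynamics Module"): encode trajectories with
    # a stand-in linear projection (a learned network in the real model).
    traj = object_coords.reshape(object_coords.shape[0], -1)  # (O, T*2)
    proj = np.ones((traj.shape[1], d)) / traj.shape[1]        # placeholder weights
    dynamics = traj @ proj                                    # (O, d)
    # Fuse the two streams: broadcast a pooled dynamics signal onto the
    # appearance-refined patch tokens.
    return attended + dynamics.mean(axis=0)
```

In the actual model the object tokens come from detected boxes via region pooling and the fused patch tokens feed the next transformer layer; here the fusion is reduced to a single addition to keep the sketch self-contained.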

Roei Herzig, Elad Ben-Avraham, Karttikeya Mangalam, Amir Bar, Gal Chechik, Anna Rohrbach, Trevor Darrell, Amir Globerson • 2021

Related benchmarks

Task                          Dataset                             Result                              Rank
Action Recognition            Something-Something v2 (val)        Top-1 Accuracy: 69.5                535
Action Recognition            Something-Something v2              Top-1 Accuracy: 67.9                341
Action Recognition            Something-Something v2 (test val)   Top-1 Accuracy: 69.5                187
Action Recognition            EPIC-KITCHENS 100 (test)            Top-1 Verb Acc: 68.4                101
Action Recognition            SSV2                                Top-1 Acc: 67.9                     93
Action Recognition            Diving-48                           Top-1 Acc: 88                       82
Action Recognition            Diving-48 (test)                    Top-1 Acc: 88                       81
Action Anticipation           EPIC-KITCHENS 100 (test)            Overall Action Top-5 Recall: 21.53  59
Video Classification          Something-Something v2              Top-1 Acc: 69.5                     56
Video Action Classification   Diving-48                           Top-1 Acc: 88                       53
Showing 10 of 28 rows

Other info

Code
