
AssembleNet++: Assembling Modality Representations via Attention Connections

About

We create a family of powerful video models which are able to: (i) learn interactions between semantic object information and raw appearance and motion features, and (ii) deploy attention in order to better learn the importance of features at each convolutional block of the network. A new network component named peer-attention is introduced, which dynamically learns the attention weights using another block or input modality. Even without pre-training, our models outperform previous work on standard public activity recognition datasets with continuous videos, establishing a new state-of-the-art. We also confirm that our two findings, adding neural connections from the object modality and using peer-attention, are generally applicable to different existing architectures, improving their performance. We name our model explicitly as AssembleNet++. The code will be available at: https://sites.google.com/corp/view/assemblenet/
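To make the peer-attention idea concrete, below is a minimal, hypothetical PyTorch sketch: channel-wise attention weights for one block's features are computed from a globally pooled *peer* block (e.g., an object-segmentation modality) through a fully connected layer and a sigmoid. The class name, tensor shapes, and single-layer design are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class PeerAttention(nn.Module):
    """Channel-wise attention whose weights come from a *peer* block's
    features rather than from the target block itself (a sketch of the
    peer-attention idea; details here are assumptions)."""

    def __init__(self, peer_channels: int, target_channels: int):
        super().__init__()
        # Maps globally pooled peer features to per-channel weights
        # for the target block.
        self.fc = nn.Linear(peer_channels, target_channels)

    def forward(self, target: torch.Tensor, peer: torch.Tensor) -> torch.Tensor:
        # target, peer: (batch, channels, time, height, width) video features.
        pooled = peer.mean(dim=(2, 3, 4))         # global average pool -> (B, C_peer)
        weights = torch.sigmoid(self.fc(pooled))  # (B, C_target), values in (0, 1)
        # Broadcast weights over time and space to reweight target channels.
        return target * weights[:, :, None, None, None]

# Usage: modulate appearance features with weights derived from a
# semantic object modality block (shapes are illustrative).
appearance = torch.randn(2, 64, 8, 14, 14)
objects = torch.randn(2, 32, 8, 14, 14)
attn = PeerAttention(peer_channels=32, target_channels=64)
out = attn(appearance, objects)  # same shape as `appearance`
```

Because the attention weights are a function of a different block or modality, the network can learn which peer is most informative for gating each block's features, rather than relying on self-gating alone.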

Michael S. Ryoo, AJ Piergiovanni, Juhana Kangaspunta, Anelia Angelova • 2020

Related benchmarks

Task                             | Dataset                         | Metric                  | Result | Rank
Action Recognition               | Charades                        | mAP                     | 0.567  | 64
Action Classification            | Smarthome (cross-subject)       | Accuracy                | 63.6   | 58
Action Recognition               | Toyota Smarthome CS             | Accuracy                | 63.6   | 58
Action Recognition               | Charades (test)                 | mAP                     | 0.598  | 53
Video Classification             | Charades                        | mAP                     | 59.8   | 38
Multi-label video classification | Charades 12 fps setting (test)  | mAP                     | 55     | 15
Activity Recognition             | Toyota Smarthome                | Mean Per-Class Accuracy | 63.64  | 8

Other info

Code: https://sites.google.com/corp/view/assemblenet/
