
Approximated Bilinear Modules for Temporal Modeling

About

We consider two less-emphasized temporal properties of video: (1) temporal cues are fine-grained; (2) temporal modeling needs reasoning. To tackle both problems at once, we exploit approximated bilinear modules (ABMs) for temporal modeling. Two main points make the modules effective: two-layer MLPs can be seen as a constrained approximation of bilinear operations, and thus can be used to construct deep ABMs in existing CNNs while reusing pretrained parameters; frame features can be divided into static and dynamic parts because of visual repetition in adjacent frames, which makes temporal modeling more efficient. Multiple ABM variants and implementations are investigated, ranging from high performance to high efficiency. Specifically, we show how two-layer subnets in CNNs can be converted into temporal bilinear modules by adding an auxiliary branch. Besides, we introduce snippet sampling and shifting inference to boost sparse-frame video classification performance. Extensive ablation studies demonstrate the effectiveness of the proposed techniques. Our models outperform most state-of-the-art methods on the Something-Something v1 and v2 datasets without Kinetics pretraining, and are also competitive on other YouTube-like action recognition datasets. Our code is available at https://github.com/zhuxinqimac/abm-pytorch.
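The core intuition in the abstract can be sketched in a few lines of numpy. This is a hypothetical illustration, not the paper's implementation: `U`, `V`, `P`, and `approx_bilinear` are illustrative names. A low-rank bilinear form between adjacent frame features looks like a two-layer subnet (two linear maps with an elementwise interaction in between) plus an auxiliary branch for the second temporal input, which is why such subnets can be converted while reusing pretrained weights. The static/dynamic split is also visible: because the form is linear in the second argument, the interaction decomposes over the static frame content and the small motion residual.

```python
import numpy as np

rng = np.random.default_rng(0)

C, R = 64, 32  # channel dimension, low-rank dimension (illustrative sizes)
U = rng.standard_normal((R, C)) * 0.1  # main branch (plays the role of a pretrained first layer)
V = rng.standard_normal((R, C)) * 0.1  # auxiliary branch added for the second frame
P = rng.standard_normal((C, R)) * 0.1  # output projection (plays the role of the second layer)

def approx_bilinear(x_t, x_next):
    """Low-rank bilinear interaction z = P ((U x_t) * (V x_next))."""
    return P @ ((U @ x_t) * (V @ x_next))

# Adjacent frames are largely redundant: model the next frame as the current
# frame (static part) plus a small motion residual (dynamic part).
f_t = rng.standard_normal(C)
f_next = f_t + 0.05 * rng.standard_normal(C)
static, dynamic = f_t, f_next - f_t

out = approx_bilinear(f_t, f_next)
print(out.shape)  # (64,)

# Linearity in the second argument: the interaction splits exactly into a
# static term and a (cheap, low-magnitude) dynamic term.
split = approx_bilinear(f_t, static) + approx_bilinear(f_t, dynamic)
print(np.allclose(out, split))  # True
```

The exact decomposition above is what makes the static/dynamic split attractive: the static term depends only on within-frame content, so only the small dynamic residual needs the temporal interaction.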

Xinqi Zhu, Chang Xu, Langwen Hui, Cewu Lu, Dacheng Tao • 2020

Related benchmarks

Task | Dataset | Result | Rank
Action Recognition | Something-Something v2 (test) | Top-1 Accuracy 61.2 | 333
Action Recognition | Something-Something v1 (test) | Top-1 Accuracy 49.8 | 189
