
Mutual Modality Learning for Video Action Classification

About

Models for video action classification are progressing rapidly. However, their performance can still be easily improved by ensembling with the same models trained on different modalities (e.g., optical flow). Unfortunately, it is computationally expensive to use several modalities during inference. Recent works examine ways to integrate the advantages of multi-modality into a single RGB model, yet there is still room for improvement. In this paper, we explore various methods to embed the power of an ensemble into a single model. We show that proper initialization, as well as mutual modality learning, enhances single-modality models. As a result, we achieve state-of-the-art results on the Something-Something-v2 benchmark.
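The mutual learning idea mentioned in the abstract can be sketched as follows: two single-modality networks (e.g., an RGB model and an optical-flow model) are trained jointly, each minimizing its own cross-entropy loss plus a KL-divergence term that pulls its predictions toward the other model's. A minimal NumPy sketch is below; the weighting `alpha`, the toy logits, and the exact form of the loss are illustrative assumptions, not values or formulas taken from the paper.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kl(p, q, eps=1e-12):
    # KL(p || q), summed over classes, averaged over the batch
    return np.mean(np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1))

def mutual_loss(logits_a, logits_b, labels, alpha=0.5):
    """Loss for model A: its own cross-entropy plus a KL term
    pulling its predictions toward model B's.

    Each model in the pair is trained with this loss (with a and b
    swapped), so the two modalities teach each other. `alpha` is an
    assumed mixing weight for illustration only.
    """
    p_a, p_b = softmax(logits_a), softmax(logits_b)
    idx = np.arange(len(labels))
    ce = -np.mean(np.log(p_a[idx, labels] + 1e-12))
    return ce + alpha * kl(p_b, p_a)

# Toy example: batch of 2 samples, 3 action classes
rgb_logits  = np.array([[2.0, 0.5, -1.0], [0.1, 1.5, 0.3]])
flow_logits = np.array([[1.5, 1.0, -0.5], [0.0, 2.0, 0.1]])
labels = np.array([0, 1])

loss_rgb  = mutual_loss(rgb_logits, flow_logits, labels)   # trains the RGB model
loss_flow = mutual_loss(flow_logits, rgb_logits, labels)   # trains the flow model
```

In an actual training loop the gradients of each loss would update only the corresponding network; at inference time only the RGB model is kept, which is what avoids the multi-modality cost the abstract describes.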

Stepan Komkov, Maksim Dzabraev, Aleksandr Petiushko • 2020

Related benchmarks

| Task                 | Dataset                        | Result               | Rank |
|----------------------|--------------------------------|----------------------|------|
| Action Recognition   | Something-Something v2 (val)   | Top-1 Accuracy 69.07 | 535  |
| Action Recognition   | Something-Something v2 (test)  | Top-1 Accuracy 69.02 | 333  |
| Video Classification | Something-Something v2         | Top-1 Accuracy 69.1  | 56   |
| Video Recognition    | Charades                       | --                   | 11   |

Other info

Code
