Hierarchical Self-supervised Representation Learning for Movie Understanding
About
Most self-supervised video representation learning approaches focus on action recognition. In contrast, this paper focuses on self-supervised video learning for movie understanding and proposes a novel hierarchical self-supervised pretraining strategy that separately pretrains each level of our hierarchical movie understanding model (based on [37]). Specifically, we pretrain the low-level video backbone with a contrastive learning objective and the higher-level video contextualizer with an event mask prediction task, which enables the use of different data sources for pretraining the different levels of the hierarchy. We first show that our self-supervised pretraining strategies are effective and lead to improved performance on all tasks and metrics of the VidSitu benchmark [37] (e.g., improving semantic role prediction from 47 to 61 CIDEr). We further demonstrate the effectiveness of our contextualized event features on LVU tasks [54], both alone and combined with instance features, showing their complementarity.
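The two pretraining objectives described above can be sketched as follows. This is a simplified NumPy illustration under stated assumptions, not the paper's implementation: the function names, the use of an MSE reconstruction target for the event-mask task, and the feature shapes are all assumptions made for the sketch.

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.07):
    """Contrastive (InfoNCE-style) objective for the low-level backbone:
    each clip embedding should match its own positive view (diagonal)
    rather than the other clips in the batch. Shapes: (N, D)."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature                 # (N, N) similarity matrix
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -float(np.mean(np.diag(log_prob)))      # positives on the diagonal

def masked_event_loss(event_feats, mask, predictions):
    """Event-mask prediction for the contextualizer: reconstruct the
    features of masked events from context (MSE as a stand-in target).
    event_feats, predictions: (T, D); mask: (T,) boolean, True = masked."""
    diff = (predictions - event_feats)[mask]
    return float(np.mean(diff ** 2))
```

In this sketch the backbone loss operates on individual clip embeddings, while the event-mask loss operates on a sequence of event features, which is what allows the two levels to be pretrained on different data sources.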
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Semantic Role Prediction | VidSitu (test) | CIDEr | 61.18 | 17 |
| Long-form Video Understanding | LVU (test) | Relation Top-1 Acc. | 52.38 | 16 |
| Event Relation Prediction | VidSitu | Mean Accuracy | 35.32 | 12 |
| Long Video Understanding (Classification & Regression) | LVU (test) | Place Accuracy | 44.1 | 10 |
| Verb Prediction | VidSitu (test) | Accuracy@1 | 44.67 | 7 |