Unsupervised Learning of Long-Term Motion Dynamics for Videos
About
We present an unsupervised representation learning approach that compactly encodes the motion dependencies in videos. Given a pair of images from a video clip, our framework learns to predict long-term 3D motion. To reduce the complexity of the learning task, we propose to describe the motion as a sequence of atomic 3D flows computed from the RGB-D modality. We use a Recurrent Neural Network based encoder-decoder framework to predict these sequences of flows. We argue that for the decoder to reconstruct these sequences, the encoder must learn a robust video representation that captures long-term motion dependencies and spatio-temporal relations. We demonstrate the effectiveness of our learned temporal representations on activity classification across multiple modalities and datasets, such as NTU RGB+D and MSR Daily Activity 3D. Our framework applies to any input modality, i.e., RGB, depth, and RGB-D videos.
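The encoder-decoder idea above can be sketched as follows. This is a minimal, hypothetical illustration (all layer sizes, the prediction horizon, and the flattened flow dimension are assumptions, not the paper's actual configuration): an encoder RNN summarizes features of an input frame pair into a latent state, and a decoder RNN unrolls that state into a sequence of predicted atomic 3D flows.

```python
# Minimal sketch (hypothetical sizes/names) of an RNN encoder-decoder that
# predicts a sequence of atomic 3D flows from a frame pair, assuming PyTorch.
import torch
import torch.nn as nn

class MotionEncoderDecoder(nn.Module):
    def __init__(self, feat_dim=128, hidden_dim=256,
                 flow_dim=3 * 16 * 16, horizon=8):
        super().__init__()
        self.horizon = horizon
        # Encoder: consumes per-frame features (assumed precomputed, e.g. by a CNN).
        self.encoder = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        # Decoder: unrolls the encoder's final state into future flow frames.
        self.decoder = nn.LSTM(flow_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, flow_dim)

    def forward(self, frame_feats):
        # frame_feats: (batch, 2, feat_dim) -- features of the input frame pair.
        _, state = self.encoder(frame_feats)
        batch = frame_feats.size(0)
        # Start the decoder from a zero "flow token"; feed back each prediction.
        flow = torch.zeros(batch, 1, self.head.out_features)
        outputs = []
        for _ in range(self.horizon):
            out, state = self.decoder(flow, state)
            flow = self.head(out)          # predicted atomic flow for this step
            outputs.append(flow)
        return torch.cat(outputs, dim=1)   # (batch, horizon, flow_dim)

model = MotionEncoderDecoder()
pred = model(torch.randn(4, 2, 128))       # shape: (4, 8, 768)
```

For activity classification, one would discard the decoder after unsupervised training and use the encoder's latent state as the video representation.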
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Action Recognition | NTU RGB+D 60 (Cross-View) | Accuracy | 53.2 | 575 |
| Action Recognition | NTU RGB+D (Cross-Subject) | Accuracy | 66.2 | 474 |
| Action Recognition | UCF101 (test) | Accuracy | 53 | 307 |
| Action Recognition | NTU RGB+D Cross-Subject 60 | Accuracy | 61.4 | 305 |
| Skeleton-based Action Recognition | NTU 60 (X-sub) | Accuracy | 61.4 | 220 |
| Skeleton-based Action Recognition | NTU 60 (X-view) | Accuracy | 53.2 | 119 |
| Skeleton-based Action Recognition | NW-UCLA | Accuracy | 50.7 | 44 |
| Action Recognition | N-UCLA | Accuracy | 50.7 | 36 |
| Action Recognition | UCF101 (train) | Accuracy | 53 | 12 |