
Unsupervised Learning of Long-Term Motion Dynamics for Videos

About

We present an unsupervised representation learning approach that compactly encodes the motion dependencies in videos. Given a pair of images from a video clip, our framework learns to predict the long-term 3D motions. To reduce the complexity of the learning framework, we propose to describe the motion as a sequence of atomic 3D flows computed from the RGB-D modality. We use a Recurrent Neural Network-based Encoder-Decoder framework to predict these sequences of flows. We argue that in order for the decoder to reconstruct these sequences, the encoder must learn a robust video representation that captures long-term motion dependencies and spatio-temporal relations. We demonstrate the effectiveness of our learned temporal representations on activity classification across multiple modalities and datasets, such as NTU RGB+D and MSR Daily Activity 3D. Our framework applies to any input modality, i.e., RGB, Depth, and RGB-D videos.
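The abstract outlines the core pattern: encode a pair of frames into a compact representation, then unroll a recurrent decoder into a sequence of atomic 3D flows. Below is a minimal PyTorch sketch of that encoder-decoder idea. The module names (FrameEncoder, FlowSequenceDecoder), layer sizes, sequence length, flow resolution, and the L2 reconstruction loss are illustrative assumptions, not the paper's actual architecture or training objective.

```python
import torch
import torch.nn as nn

class FrameEncoder(nn.Module):
    """Encodes a pair of frames into a compact motion representation.

    A small conv net stands in for whatever backbone the paper uses;
    the two frames are stacked along the channel axis (e.g. 2 x 4 = 8
    channels for a pair of RGB-D frames).
    """
    def __init__(self, in_channels=8, hidden_dim=256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(64, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, hidden_dim)

    def forward(self, frame_pair):                 # (B, in_channels, H, W)
        h = self.conv(frame_pair).flatten(1)       # (B, 64)
        return self.fc(h)                          # (B, hidden_dim)

class FlowSequenceDecoder(nn.Module):
    """LSTM decoder that unrolls the encoding into T atomic 3D flow maps."""
    def __init__(self, hidden_dim=256, flow_hw=(28, 28), steps=8):
        super().__init__()
        self.steps = steps
        self.flow_hw = flow_hw
        self.lstm = nn.LSTMCell(hidden_dim, hidden_dim)
        self.to_flow = nn.Linear(hidden_dim, 3 * flow_hw[0] * flow_hw[1])

    def forward(self, z):                          # (B, hidden_dim)
        h, c = z, torch.zeros_like(z)
        flows = []
        for _ in range(self.steps):
            h, c = self.lstm(z, (h, c))
            # Each step emits one 3-channel (x, y, z) flow map.
            f = self.to_flow(h).view(-1, 3, *self.flow_hw)
            flows.append(f)
        return torch.stack(flows, dim=1)           # (B, T, 3, H', W')

# Illustrative pre-training step: the abstract does not specify the loss,
# so a plain L2 regression against the target flow sequence is assumed here.
encoder, decoder = FrameEncoder(), FlowSequenceDecoder()
pair = torch.randn(4, 8, 224, 224)                 # a batch of RGB-D frame pairs
target = torch.randn(4, 8, 3, 28, 28)              # target atomic 3D flow sequence
pred = decoder(encoder(pair))
loss = nn.functional.mse_loss(pred, target)
loss.backward()
```

In this reading, the decoder sees nothing but the encoder output, so reconstructing a long flow sequence forces the encoding to capture long-term motion; after pre-training, the encoder can be reused as a feature extractor for the activity classification experiments mentioned above.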

Zelun Luo, Boya Peng, De-An Huang, Alexandre Alahi, Li Fei-Fei • 2017

Related benchmarks

Task                                 Dataset                        Result          Rank
Action Recognition                   NTU RGB+D 60 (Cross-View)      Accuracy 53.2   575
Action Recognition                   NTU RGB+D (Cross-Subject)      Accuracy 66.2   474
Action Recognition                   UCF101 (test)                  Accuracy 53     307
Action Recognition                   NTU RGB-D Cross-Subject 60     Accuracy 61.4   305
Skeleton-based Action Recognition    NTU 60 (X-sub)                 Accuracy 61.4   220
Skeleton-based Action Recognition    NTU 60 (X-view)                Accuracy 53.2   119
Skeleton-based Action Recognition    NW-UCLA                        Accuracy 50.7   44
Action Recognition                   N-UCLA                         Accuracy 50.7   36
Action Recognition                   UCF101 (train)                 Accuracy 53     12
