
Unsupervised Learning of Disentangled Representations from Video

About

We present DrNet, a new model that learns disentangled image representations from video. Our approach leverages the temporal coherence of video and a novel adversarial loss to learn a representation that factorizes each frame into a stationary part and a temporally varying component. The disentangled representation can be used for a range of tasks. For example, applying a standard LSTM to the time-varying components enables prediction of future frames. We evaluate our approach on a range of synthetic and real videos, demonstrating the ability to coherently generate hundreds of steps into the future.
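The factorization above can be sketched in code. This is a minimal illustration, not the paper's implementation: the linear maps stand in for DrNet's convolutional content and pose encoders, a one-step recurrent update stands in for the LSTM, and the adversarial loss used during training is omitted. All names and dimensions are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions for illustration only.
FRAME_DIM, CONTENT_DIM, POSE_DIM = 64, 16, 4

# Random linear maps stand in for the trained conv encoders/decoder.
W_content = rng.standard_normal((CONTENT_DIM, FRAME_DIM)) * 0.1
W_pose = rng.standard_normal((POSE_DIM, FRAME_DIM)) * 0.1
W_dec = rng.standard_normal((FRAME_DIM, CONTENT_DIM + POSE_DIM)) * 0.1
# One-step pose predictor standing in for the LSTM.
W_pred = rng.standard_normal((POSE_DIM, POSE_DIM)) * 0.1

def encode(frame):
    """Factorize a frame into a stationary content code and a
    time-varying pose code (the two-encoder split)."""
    return W_content @ frame, W_pose @ frame

def decode(content, pose):
    """Reconstruct a frame from the concatenated codes."""
    return W_dec @ np.concatenate([content, pose])

def predict_future(frames, n_steps):
    """Hold the content code fixed from the last observed frame,
    roll the pose code forward step by step, and decode each
    predicted pose back into a frame."""
    content, pose = encode(frames[-1])
    preds = []
    for _ in range(n_steps):
        pose = np.tanh(W_pred @ pose)  # recurrent pose update
        preds.append(decode(content, pose))
    return preds

video = [rng.standard_normal(FRAME_DIM) for _ in range(5)]
future = predict_future(video, n_steps=100)
```

Because the content code is held fixed while only the low-dimensional pose code is rolled forward, long generation horizons stay coherent: errors accumulate in the pose dynamics, not in the appearance of the scene.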

Remi Denton, Vighnesh Birodkar · 2017

Related benchmarks

Task                                  Dataset                   Metric           Result  Rank
3D Human Pose Estimation              MPI-INF-3DHP (test)       -                -       559
Video Prediction                      Moving MNIST (test)       MSE              45.2    82
Video Prediction                      Coloured dSprites (test)  MSE              15.2    5
Video Prediction                      Sprites (test)            MSE              94.4    5
Disentangled Representation Learning  Sprites (test)            Gender Accuracy  80.5    4
Disentangled Representation Learning  Coloured dSprites (test)  Shape Accuracy   95.7    4
